Software For Profit

~ How to Build Incredible Software

Category Archives: Technique

What The Big Data Revolution Teaches Us About The Innovating Organization

12 Saturday Oct 2019

Posted by Kyle in agile, Management, Philosophy, Technique

≈ Leave a comment

Tags

agile, influence, leadership

One of the fascinating technical advancements happening recently has been around distributed data and distributed data processing. The whole “Big Data” revolution rode in on the back of Hadoop. Hadoop is a tool built on a simple paradigm: combining two logical set operations (viz. map and reduce) into one clean way to heavily parallelize the processing of massive sets of data. Now granted, the data and the problem have to be a fit for the solution – and not all data and problems are. But nonetheless, if one’s problem space can be fit to the solution shape, the payoff is huge. There are newer techniques that follow the same broad pattern, but the underlying principle is that problems are split into pieces that don’t rely on each other, and can thus be run in or out of order, on the same computer or a different one, while still eventually arriving at meaningful results.
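
To make the shape concrete, here is a toy sketch – a word count written with plain Scala collections, not Hadoop itself, and with invented data purely for illustration. Each line can be mapped on its own, and the per-word counts can be combined in any order:

    object WordCount {
      def main(args: Array[String]): Unit = {
        val lines = Seq("to be or not to be", "be here now")

        val counts = lines
          .flatMap(_.split("\\s+"))     // map: break each line into words
          .map(word => word -> 1)       // map: emit a (word, 1) pair per occurrence
          .groupBy(_._1)                // shuffle: gather the pairs by word
          .map { case (word, pairs) =>  // reduce: sum the counts for each word
            word -> pairs.map(_._2).sum
          }

        counts.foreach { case (word, n) => println(s"$word: $n") }
      }
    }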

Data storage has been similarly advancing, largely along the same lines. Several years ago, the line of “NoSQL” tools started to crop up, offering technical trade-offs that weren’t available when we relied solely on relational databases for our production data storage. Most interestingly, they allowed for a thing called “eventual consistency”. The dogma throughout my early years as a software geek was that data should always be what it’s supposed to be – it should be, in the fastest way possible, completely and utterly consistent. This turns out to be unnecessary for large swaths of problems. And so our beliefs about this evolved, and with them the tools we build for ourselves. The general outcome is that if we allow our database to run on a number of computers, geographically dispersed, which don’t even have to be immediately consistent, suddenly storing that data becomes far less burdensome. We can scale by throwing hardware at the problem, so that we don’t have to spend so much expensive thinking time on making the data storage as fully optimal as possible. And now we are commonly in the range of storing more than a petabyte of data.

I want to highlight that just like with Hadoop and distributed processing, the distributed storage problem has to match the solution shape. Eventual consistency (in this case – and please forgive the radical simplification I’m doing to keep this blog post under a Petabyte itself) has to be ok.

And this is the big idea – the thing that big data teaches us about team organization:

In order to scale out massively, we have to place particular limits on the shape of the problem we are solving.

Demanding 100% control over our data – so that we always know it is entirely consistent – is comforting in a way. But it demands a lot of the technical structure of the database. You can’t parallelize operations, you can’t do them out of order, you can’t distribute them. The benefit requires a trade-off in restriction. Conversely, if you shape your application and your business logic such that it is useful regardless of the immediate consistency of the data, you can scale broadly.

On the data processing side – if we serialize processing, depend on the results of previous operations, and share state, we limit ourselves to essentially one thread of processing, with shared, controlled memory to hold state, and scaling is something we can only do by adding more power to a single computer.
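
Here’s a toy contrast, as you might type it into a Scala REPL (the numbers are invented): the first version must run in order because every step updates shared state; the second is shaped as independent pieces whose partial results can be computed in any order, on any machine, and then combined.

    // Serialized: a shared, mutable accumulator forces one ordered pass.
    var runningTotal = 0L
    (1 to 1000000).foreach(n => runningTotal += n)

    // Distributed-friendly: independent chunks, each summed on its own,
    // then combined – order and location no longer matter.
    val partials = (1 to 1000000).grouped(100000).map(chunk => chunk.map(_.toLong).sum)
    val total    = partials.sum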

In general we give up immediate control for immediate chaos – with an understanding that the answer will emerge from the chaos, because of how we’ve shaped it.

With a computer it’s easy to see why this is true: memory and processing on a single computer, or on a tightly coordinated few computers, have an upper limit. They always will, though that limit will increase slightly over long periods of time. Designing a solution that can arbitrarily scale to any number of computers leaves us limited only by the number of computers we can get a hold of, which is a completely different, and much easier, ceiling to push up.

With humans, it’s even worse: until we make a breakthrough and figure out how to implant cybernetic interfaces directly into our brains, our capacity has been the same for thousands of years. And, more discouragingly, it will remain the same for many years to come. We cannot buy a more powerful processor; we cannot add memory. Some models come installed with a trivial, incrementally larger amount of one or the other or both, but it’s not enough to really make a difference.

To solve large problems, you need more than one person. To solve problems requiring creativity and human ingenuity, humans need the autonomy to explore with their intuition. This means that every human being involved in solving the problem will have to saturate their ability to think. No matter how smart the genius, it significantly stunts the effort if we pretend they can serialize all the thinking through themselves – even if it’s a couple, or a small team, of geniuses that are tightly coordinated. To really achieve anything great in a modern software organization – or rather, in a modern information-work-based organization, and actually that’s EVERY organization – the thinking has to be fully distributed.

For the EXACT same reason as it has to be with distributed computing and distributed storage. The problem size has a limit if we want full control. It doesn’t if we can allow the solution to emerge from the chaos of coordinated but distributed activity.

And just like the distributed examples above, this simply means we have to choose to forge our problems to match the solution shape. Organizationally, this means realizing at every level that the leadership and initiative necessary to drive something to completion – the thing that drives us so quickly to top-down, command-and-control structures – has to be embedded in every doer. Positional leaders in the organization should be few and should be “Clock Builders” – to use Jim Collins’ brilliant phrase – shaping the system and providing just enough structure so things don’t fly apart.

And when we get down to the software – the teams have to be shaped so that there are no dependencies from one team on another. Not for architecture, database programming, testing, or moving something into production. The small software team that works together day to day, to fully distribute the innovation and creation that we humans are designed so well for, should have everything it needs to handle the software from cradle to grave – including but not limited to the necessary authority and skill sets – with every team member possessing the leadership and initiative to drive anything through, even if the rest of the organization burned to the ground.

Failure to distribute our creativity and innovation in this way will rightly be viewed as just as backwards as a software development team committing to only storing megabytes of data in a fully consistent relational database is today.

So let’s get organized and build some amazing software!!

Kyle

The one thing you need to do to build great software….

28 Saturday Sep 2019

Posted by Kyle in agile, Management, Technique

≈ 3 Comments

The Great Convergence: Part II

16 Monday Sep 2019

Posted by Kyle in Philosophy, SOLID, Technique

≈ Leave a comment

The Single Responsibility Principle.  It’s my desert island principle.  If I had my brain cleansed of all software wisdom except for one thing – it would be the good ol’ SRP.  Every software creator should seek to have a tight grasp of this principle.  It provides a huge amount of clarity in exchange for the simple task of thinking about things in this particular way.

And that is, of course, that every thing (method, function, class, package, etc..) should have only one reason to change – it should have a single responsibility.  I won’t re-hash the entire principle write-up that I linked to above – but it creates clarity when you read the code because your brain doesn’t have to shift contexts as it looks at a single thing, and when you go to change code, you don’t risk changing a responsibility you’re not interested in changing.

The SOLID principles were originally, and are frequently, referred to as Principles of Object Oriented Development.  And they are that.  But they are actually deeper than that – they are Principles of Software Development that apply equally to any development paradigm.  Of particular interest, in addition to OOP – because of its impact on the current state of the art – is Functional Programming.

These two paradigms, OOP and FP, are often pitched as being radically different.  They aren’t – they stem from the same need, take two different paths, but arrive ultimately in the same place.

I often say that there are two audiences that you write your software for; the user, and the reader.  Making a computer do something useful these days is relatively straightforward.  The art – the thing that distinguishes the true Craftsman from the amateur – is how you treat the second audience – the reader of the code.  And how one reads and understands code is as subjective as, if not more so than, how one uses the functioning system.  It’s an exercise in aesthetics.  It is understanding vocabulary and idioms, and it’s making intent and reasoning clear.  When all these pieces come together, the reader has a positive emotional response and is able to understand and modify the code quickly.

Making the intent of the reasoning clear means making concepts crisp and logic obvious.  And it means using abstraction to eliminate unnecessary details (because we’re limited in our ability to store information and reason about it).  If we didn’t have to worry about humans understanding and reasoning about our code, we could write it all in binary – there would be no reason to bother with higher-level languages.  They’re an accommodation for our inherent weakness.

Human reasoning happens on a number of levels – from levels that we are explicitly aware of, where we can consciously arrange syllogisms to get to the result we want, to levels that are so unconscious that we barely even know we’re doing it (though we might be able to feel ourselves doing it).  We romanticize the unconscious reasoning – because it can look and feel like magic – because we’re pulling answers out of a hat without being able to describe how we got to them.  We tend to think of the more conscious versions of our reasoning as more scientific, more rational.  The truth is they’re both very powerful and useful in different ways – some of us balance between the two, some of us have a severe tendency toward one or the other.

In software development, these two styles are approached with the two high-level paradigms we’ve mentioned.  OOP, with its mantras about nouns that match the real world and things that do things, is an approach that appeals to the unconscious style of reasoning – more purely to intuition.  And FP, with its mantras about equational reasoning and functional purity, appeals to explicit rationale – to the conscious style of reasoning.

The great part is that our consciousness of the reasoning we’re performing doesn’t change the nature of the reasoning.  We abstract and then draw causal connection and then deduce the outcome.  Subconscious reasoning, with its speed, may sacrifice accuracy.  And conscious reasoning, with its accuracy, may sacrifice speed.  But the ultimate reality is it’s the same thing, just done differently by us.

The SOLID principles cut beneath these mechanisms and connect to the underlying reality of reasoning.  Since this is the case, they apply directly to any kind of programming we may do.  And further, the more we follow SOLID principles, the more our code is inherently rational.  So whether we start from an FP perspective and follow SOLID Principles, or start from an OOP perspective and follow SOLID Principles, we end up in the same place: code that is inherently more rational – which, on the surface, means OO code that starts to look like good FP code and FP code that starts to look like good OO code.

Anyway – back to SRP – step with me into the shoes of an OO programmer.  One of the articles of common wisdom held by OO programmers is that you’re supposed to “hide data behind behavior”.  That is, if you believe the textbooks on the topic, you might think it’s mandatory that if you have a class, it has state *and* behavior.

But if we look closely at the SRP – and think about what it means to “have a single reason to change” – we realize that a class having state and behavior *always* has two reasons to change.  A class that has a Single Responsibility really only represents one or the other.

And further – a good class that’s doing only one thing, shouldn’t really have more than a method or two.  And two is pushing it.  Unless, maybe you’re using the class as a kind of package, in which case you might have related methods that do independent things.

And both of these apply across the board.  There are always edge cases where the flexibility is nice to have – but if we’re really being honest with ourselves, we’re not really following SRP if we don’t do these two things.

And if you look at it – if you follow SRP closely like this – immediately the code is easier to reason about.  But it is also surprisingly close to good FP code.  There’s no state, so the method’s result (barring dependence on state elsewhere) is always only the product of its parameters.
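
Here’s a tiny, hypothetical sketch of that split (the invoice names are mine, purely for illustration):

    // Two reasons to change: the shape of an invoice AND the calculation rule.
    class Invoice(var lineAmounts: List[BigDecimal]) {
      var total: BigDecimal = BigDecimal(0)
      def recalculate(): Unit = total = lineAmounts.sum
    }

    // One responsibility each: the data type changes only when an invoice's shape changes...
    final case class InvoiceData(lineAmounts: List[BigDecimal])

    // ...and the behavior changes only when the calculation rule changes.  Its result
    // depends solely on its parameter – which is exactly what a pure function looks like.
    object InvoiceTotals {
      def total(invoice: InvoiceData): BigDecimal = invoice.lineAmounts.sum
    }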

FP and OO are converging – simply because they are both appeals to human rationality.  But they are inspired by different phases of the same reality.  SOLID pokes at the root of this and asks us – how do we really make great code that clearly displays intent, and gives the reader the best possible experience.

Happy Coding!

Kyle

Killing Software Development Capacity, Part 1: The Performance Review

12 Thursday Sep 2019

Posted by Kyle in agile, Management, Philosophy, Technique

≈ Leave a comment

I’ve been on both sides of this morbid practice with several different organizations at this point.  The similarities are striking everywhere I’ve been.  The process usually goes something like this:

  1. Set “goals” starting at the top of the organization
  2. Make those goals more specific as you move down the org chart
  3. For each individual
    1. Make sure that there’s awareness that financial incentives rest on meeting these goals
    2. Ostensibly allow for modifications to goals throughout the year
  4. At the end of the year (and sometimes throughout the year)
    1. “Calibrate” performance across all disciplines and parts of the organization
    2. Use the goals starting at the leaf nodes and working up to the top of the org as the baseline
  5. Individual managers rate their employees, but are held to the calibration done by the larger management team.  And the calibration is typically used to formulate a stack rank – a bell curve of performance that must be fit

As I noted above – there has been remarkable consistency in the different organizations I’ve been a part of with regards to the thinking about and implementation of this process.  I have to assume that there’s a common source of knowledge that leads to this – simply because the coincidence is a bit too stark.  Though, I am unaware of what the source is.

There are a lot of similarities, both in the form and the results of the practice between this system and the software development approach commonly known as Waterfall.  And as with Waterfall, I would concede that on the surface, it’s intuitively very appealing.  And I can imagine a lot of scenarios where it may be a very helpful technique.

Software Development is not one of these scenarios.

It cannot be, due to its nature.  I will do my best here to spell this out specifically for software development, though I imagine that the arguments apply equally to any information work.

Software development is a learning activity.  It tends to get conflated in the popular mind with the act of typing in a computer program (“coding”).  The act of actually entering the code is the least time consuming, least important part of developing software to meet a human need.

Understanding the problem space and the logical bounds that must be worked within, and how one might put the two together and model the specific solution is the bulk of the work.  Because of our cognitive limitations, that will always involve entering some code, seeing how well it serves the problem, and how clearly it reflects a model of the problem, fixing whatever we weren’t able to get right the first time and trying again.

That is, software development is always experimental, and the real value that is created as a part of that experimentation is learning;  learning that is reflected in the individual, and in the organization, and also in the software code itself.

The Appeal

The appeal of the system laid out above is two-fold.  And it is based on some undoubtedly real needs.

Appeal #1: Organizational Leaders want to be able to set direction.  That is – at every level the management team has a real desire to be able to feel comfortable that they’re driving the people working for them in a direction that is consistent with the direction that has been laid out for them.  The filtering down, with further specification of goals is a neat, clean way to do this, that almost seems to remove the messy human stuff from the system.

Appeal #2: Organizational Leaders want to have the sense that the organization that is accountable to them is moving with urgency and with an appropriate amount of energy.  The financial awards system is seen as the carrot that can be consistently applied, again in a neat and clean way that removes as much controversy as possible.

Again, both of these appeals come from genuinely important functions of authentic leadership – to provide direction, and to generate the collective energy to get there.

The Problem

Seeing the set of problems here requires some Systems Thinking.  I’d recommend (re-)reading Senge’s seminal work on the topic “The 5th Discipline“.

Problem #1: The Impossibility of Planning Software – Anyone who has worked in software for more than a day has come to realize, with the rest of the industry, that predicting precisely what software we will be working on even as little as a couple of weeks out into the future is close to impossible.  Unless there is an extensive artificial structure in place to prevent it, the software you build today (again, because software development is primarily learning) informs the software you build tomorrow; tomorrow’s software affects that of the next day, and so on.  The uncertainty multiplies the further you get into the future.  This is why Agile has emerged from the software development world.  The only way to build software that really meets needs is to try something and adjust.

If as a software developer I predicted what software I would be working on just 6 months into the future, even in the broadest terms, I would need to adjust many times over.  That adjustment would need to be fed back into the goals that had come down, potentially adjusting those with the learning that has occurred, disrupting a ton of work that was done by a ton of folks to figure out what those goals were in the first place.  And almost precisely like the Waterfall methodology – the force of this inertia makes this a practical impossibility.

Practically, what happens is that the goals an individual software developer writes have nothing to do with the actual software they’re writing, so that any changes won’t be disruptive to the larger system.  Which, in case it’s not obvious, completely blows away the usefulness of this tool as a mechanism for setting direction.

Problem #2: Individual Focus – the goals for a specific individual are always, almost as a logical extension of the system, formulated only between the individual and their immediate supervisor.  This means that teams aren’t being built around the things that their members are ostensibly working toward, because one team member is fully unaware of what any other team member is most interested in accomplishing.  And because of this, it is almost impossible not to send the message that how someone performs as an individual is the key expectation from the leadership team.  This was heartbreaking as a software development manager, because no matter how hard I would focus on building a team, the system wouldn’t have it.

Problem #3: Motivation – One of the primary appeals of this system is that it gives a nice, clean way “to motivate people”.  Attach money to whatever it is you want people to do, and voila, problem solved – people will produce whatever you want.  This betrays a couple of deep misunderstandings of human nature.  Firstly, money is not a real motivator – people will be anxious to make more money only until they feel they are being paid fairly (to which, accomplishing more things is only one solution – leaving is another).  Secondly, it presumes that the organization must take the initiative to motivate someone.  People can be demotivated by others, but naturally we want to accomplish – we want to make a difference.  Ironically, attempting to bribe someone into doing more actually introduces a significant demotivating factor.

This is multiplied when the things that are being incentivized by financial pressure are unrelated to the work that they are primarily interested in doing (an unfortunate outcome of Problem #1).

Problem #4: Manager Bottleneck – Unlike in prior times or in other types of work, in information work the doers possess the specific knowledge required to make something happen.  There is no way that a single individual would be able to possess a team’s worth of knowledge about a particular piece of software that is to be delivered.  Yet if the manager is expected to drive the team in a particular direction using the goals as a steering wheel, it requires that they know exactly what every single job requires, and how to execute it.  And further, even if he or she is not actively directing the work, because their goals are tied to the execution of the goals of those under them, there will be powerful pressure on them to review the output of their reports.

This is absolutely antithetical to a strong software delivery team, where the full team shares the knowledge and responsibility necessary to deliver software.  And it makes the software development manager a bottleneck to delivery.  Which, in addition to being a major limitation on all but the most junior teams, means that there will be wide variance from team to team based solely on the manager’s skill level. (And none of this accounts for the burnout that will be the manager’s lot for being put in this kind of a situation)

Further, any technological innovation, process improvements, or other enhancements the team wants to make to the way it works all require up-front planning with the manager, since they will have to fit in as part of the goals.  Which means that, instead of being group exercises, they are necessarily top-down driven initiatives that will rise or fall on the manager’s say rather than on the collective voice of those doing the work.

Problem #4 is probably the most dire.  But like Problem #1, the vent that usually lets the pressure off is that the actual goals are placed on things that are less relevant and less important than the actual software that is being delivered.  And since there is more and more pressure adding up to this end, less and less value gets placed on the usefulness of the tool.  And ultimately ….

The Ultimate Outcome

Because the upper levels of management create a system that actively necessitates avoidance and reduces its own value in the eyes of those using it, all while creating an illusion of control and motivation, the resulting cynicism should be unsurprising.

The Solution

I’m a firm believer that you shouldn’t whine about a problem without pointing to the solution.

The solution here is to – and I steal from Jim Collins here – recognize that there are no silver bullets in leading an organization.  Directing people toward a common goal is a messy, human situation that requires a deep commitment to communicating unceasingly, to really understanding your business and the way the people fit into it.  And most importantly it requires a deep understanding of how software delivery (and really information work) differs from the old style of work that didn’t fall apart as readily under a heavy top-down hand.

  1. Fully eliminate this idea (Performance Review) from any part of your organization that engages in software delivery.
  2. To meet the direction needs, re-evaluate the role of the software development leader.  Make it their role to work on the system – to formulate a delivery practice based on agile principles.
  3. Re-evaluate the manager to staff ratio.  The software development manager should have almost nothing to do with the specific software and everything to do with creating a healthy system.  Because of this the number of staff reporting to a single manager can be far higher.
  4. With this decreased need for management, flatten your organization – eliminate as many intermediate layers of management as possible, so that top leaders can be more readily in contact with doers.  So that feedback can REALLY flow back up to the leaders making the strategic decisions.
  5. Take all that money you’re saving – and really pay the doers what they are worth.

None of this is particularly exciting – especially because it involves a lot of communication.  Something that, if we asked people how they really feel, they might opine is not productive – know that it is.

It also eliminates a lot of positions of authority.  Authority is a thing that can attract some people.  Though, I would argue that it attracts the wrong kind of people.

Organizing like this will do nothing but lead to better software, happier people, and much more value being delivered to the world we serve.

Happy coding!

Kyle

The Generalization Myth

20 Tuesday Aug 2019

Posted by Kyle in Philosophy, SOLID, TDD, Technique

≈ Leave a comment

Generalization is beautiful and exciting – and offers many glorious health benefits.  It reduces the amount of code we have to write, and solves problems before they even arise.  With just a tiny amount of forethought – we can make any future work we do trivial, simply by achieving the right generalization.  It makes you more attractive to the opposite sex – and if done *just* right even grants you eternal life.

No wonder, like Elsa in the Indiana Jones movie, we neglect our own lives to attain it.  “I can almost reach it, Indie”…

The Holy Grail, is of course, a myth.  Though it makes for a good tale – sprinkle in some Nazis, some betrayal, some family tension.  What a story.

In reality – the generalization that we all grasp for is also a myth.  There is no way to know ahead of time what generalization will meet all of (or even one of) our upcoming use-cases.  This is based mostly on the fact that we don’t even know what our upcoming use-cases are – let alone what kind of structure will be necessary to meet them.  And the generalization you choose ahead of time, if it doesn’t match what you need in the future, is wasteful, because it limits the moves you can make.  This is what generalization does: it limits expressive options to the abstraction that you select.  And since you can never know the generalization you need, this always results in negative ROI.  (Not as bad as falling into a bottomless pit, but still not exactly what we are going for)

The challenge is that it always SEEMS so obvious that it will result in nothing but advantage in the context we are working in, to generalize.  This instinct is good – if you let it push you toward moderate flexibility in design, and refactoring (after you’ve solved a use-case) to a more generalized structure.  Generalization that you arrive at AFTER you’ve learned what the use-cases will be (that is, after you’ve tested and coded them), and moderate flexibility in your design are both highly profitable.  But they are both the result of disciplined, after-the-fact thinking; not a result of the magical thinking that we can somehow avoid work if we divine the right generalization before-the-fact.

This is also another reason that before-the-fact generalization seems so appealing – because it appears to give us something for nothing.

After-the-fact generalization that results in clean, easy to maintain code, that has a very positive return tends to seem simply like the diligence of a mature adult.  Obviously the former, while maybe not tied to reality, is far more Rock-n-Roll.

As mature Craftsmen – we should do like Mr. Jones, and listen to the advice of his dad – “let it go, Indie, let it go…”

Once we’ve let this temptation go, we can take the following methodical approach – which will satisfy our impulse to generalize, but do it in a way that will result in a powerful, positive outcome.

  1. Solve the use-case(s) at hand, directly, with the simplest possible code.  Use a test (or tests) to prove that you’re doing this.
  2. Solve with designs that are SOLID.  SOLID leads to flexibility – and flexible systems are easier to change.
  3. Refactor: remove anything creating a lack of clarity, and generalize where there is unnecessary duplication (a small sketch of this loop follows the list).
  4. Rinse and Repeat
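
Here’s a minimal sketch of that loop, using a made-up pricing example (none of these names come from a real project):

    object Pricing {
      // Step 1: solve each use-case directly, with the simplest code that passes its test.
      def standardPrice(base: BigDecimal): BigDecimal =
        base * BigDecimal("1.20")

      // A second real use-case arrives later: members get a 10% discount.
      def memberPrice(base: BigDecimal): BigDecimal =
        base * BigDecimal("1.20") * BigDecimal("0.90")

      // Step 3: only now, with both use-cases known (and tested), is the duplication real –
      // so refactor it into the generalization the code actually asked for.
      private val TaxRate = BigDecimal("1.20")
      def price(base: BigDecimal, discount: BigDecimal = BigDecimal(1)): BigDecimal =
        base * TaxRate * discount
    }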

If we do this we will be creating amazing software!

Happy Coding!

Kyle

 

 

The Superfluous Story Point

20 Thursday Jun 2019

Posted by Kyle in agile, Technique

≈ Leave a comment

The exercise of putting “point” values on stories is a hallowed one in Scrum circles.  And rightly so – it is a powerful exercise.  Because of its marked simplicity and the underlying wisdom it embodies – it yields three important fruits, while avoiding some common but deadly software-delivery traps.

Story Pointing is first and foremost a discussion around the details of creating a particular piece of software.  The Story Point (and I’m assuming some familiarity with this exercise here) is a metric of relative complexity.  A team giving itself the mandate to arrive at this metric will instantly create deep conversation around details.

This brings impressions about the impending implementation to the forefront of peoples’ minds, turns intuition into explicit discussion, and generally drives powerful group learning.

Secondly, it pushes toward understanding what amount of scope (i.e. what size of story) is meaningful to entertain in this kind of discussion.  So, for example, a story point value of over 60 (or whatever the number is for a specific team) may mean that the story needs to be broken down into smaller parts in order to have meaningful discussions around the details of the implementation.

And lastly, the number of points in a sprint can begin to give a rough prediction of future throughput.  This allows a certain degree of anticipation and planning to start happening for stakeholders.

It does all of this while avoiding setting unrealistic expectations (which happens a lot when estimating with time values), and while not assuming or mandating the specific individuals working on the story.

Story Pointing is awesome.  But what I really want to do with this post is to save you a little time and effort.  And I want to do this by suggesting something ostensibly radical, but that I believe if you look a little deeper is only the next logical progression.  I’d like to suggest that you…

Do the Story Pointing Exercise but get rid of the points.

Huh?  Have you finally lost it, Kyle?

No – well I don’t think so – but follow me on this for a sec…

The usual series of point values available goes something like: 0, 1, 2, 3, 5, 8, 13, 20, 40, 100, and Infinity.  Not quite Fibonacci – but it captures the idea that the bigger something is, the less we can think specifically about small steps in complexity.  Great – so far so good.  If something is “Infinity”, we need more information, and it needs to be broken down to make sense out of it.  It’s easy to see how the exercise works; we assign points to stories.  And it’s easy to see how the advantages listed above follow.

Now what if we took the 100 and INF cards and threw them out, and just accepted as a new rule that if it’s more complex than a 40, we have to break it down smaller before we can make sense of it.  Does that meaningfully alter the advantages that we noted above?  No – discussion will still be driven, velocity predicted, and all without triggering any of the pitfalls.

Practically speaking, the last few teams that I’ve worked on went further.  We never really used anything beyond 13.  And even 13 is typically looked upon skeptically, in terms of being able to analyze the story in a meaningful way.  So what if we throw out 20 and 13 as well?  Anything over an 8 needs to be broken down smaller.  Have we lost out on any of the advantages yet?

Before we go any further – I’d like to highlight that the act of breaking a story down is as potent in terms of driving conversation as putting complexity numbers on one.  If you need to understand the details as a group to put a complexity estimate on a story, you need to understand the details even more to break that story down smaller.

So – if we would have otherwise had any stories larger than an 8 – we will have broken them down, and thus driven the conversation around them to a greater degree than if we’d put those higher level points on them.  So not only have we not lost anything by reducing our potential point values to 0, 1, 2, 3, 5, 8…we’re actually getting into higher quality discussion because of the breakdown of the larger things.

And if we remove 8 – have we lost anything?  Nope – again we gain.

5? Same.

3? Same.

2? Same.

Now we’re down to 0 and 1.  0 is trivial – we know when something doesn’t involve work, and there’s no reason to talk about that. Which leaves us with 1….if something isn’t a 1 we break it down further until it is.

Our pointing exercise is now – what point value is it?  Oh it’s more than a 1 – let’s break it down again.

Though, that wording is confusing – I’d suggest we make a slight semantic transformation, and simply ask “is this a small, nearly trivial story, or is it not?”  If it’s not, we break it down further.

It follows – but to make it explicit – that velocity planning is now simply counting stories, because they only have one possible value.

The time-saver with this – is the up-front savings of not having to teach people about the story pointing exercise (if it’s a shop that hasn’t done scrum before).  And the ongoing savings of simply giving a thumbs-up/thumbs-down on every story when you groom, as opposed to trying to arrive at a point value and also break the story down.

And thus we have story pointing….without the points.  And with all the same advantages.  And streamlined for the new millennium.  Story-Pointing++, if you will.  All the same great taste, half the calories.  Ok, I’ll stop.

Here’s to writing great software!

Kyle

 

 

 

 

The Great Convergence: Part I

18 Saturday May 2019

Posted by Kyle in SOLID, Technique

≈ Leave a comment

In my journey as a software craftsman, I’ve learned a few things.  One of them is that software is an art – and an aspect of that art is the interesting tension of serving two distinct audiences.  Audiences whose interests sometimes, but don’t always, align.  I’m talking of course about the audience that puts your software to use, and the audience of builders that will come behind you and make your software do new things.

In the normal world of software – the second audience is the one most neglected.  Serving this audience, though, is the measure of true craft.  It shows that an individual has developed the ability to hear the quiet impulse that has driven the greats of the past to their heights of achievement – the lust to build something great, just because we can.  And it shows an ability to think beyond the instant gratification of a pat on the back from a boss, or relieving the pressure of a driven project manager.

The other thing that I’ve learned is that, as with any art, finding the underlying principles is generally a matter of trying a thing, observing the aesthetic quality of the result, and then judging based on that if you should use that thing again in the future.  The thing you try may come from a flash of your insight – though – as I like to say – good artists borrow, great artists steal (which, incidentally, I’m pretty sure I stole from someone).

Using that approach, I’ve applied the SOLID principles to my art and I’ve discovered an intensely aesthetically pleasing result – in terms of just looking and reading the resulting code, and the ability to adjust it and modify it as situations and needs change.  These results both directly apply to the two audiences mentioned above.

Recently, a wise craftsman brought an interesting aspect of one of the SOLID principles to my attention.  He pointed out that with regards to the Open-Closed Principle, if a class exposes a public member variable it is necessarily not Closed in the OCP sense.  This is because the purpose of a member variable is to be used by methods to keep state between method calls.  That is to say, the purpose of a member variable is to alter the behavior of a method — so the behavior of the method is no longer pre-determined – it can be changed  at an arbitrary point in time.

Now technically, if the class – and more specifically the particular member variable – is simply acting as data, and there is no behavior dependent on the member variable, then changing it doesn’t alter behavior.  But, practically speaking, the expectation of member variables is that they’re used by methods in the class.

I had personally always thought of being “closed for modification” to be meant in strictly a compile-time sense.  That is, “Closed” referred specifically to source code.  But as I thought about my friend’s assertion more, a question occurred to me.  What is SOLID good for?  Well returning to what I arrived at in an experimental fashion – it is good for making source code aesthetically pleasing and easy to change.  And then a second question occurred to me – how does OCP contribute to that?  It contributes by allowing the reader to ignore the internals of existing code, which brings focus to the overall structure and to the overarching message that it is communicating.  This is artistically more compelling and it makes the overall code-base easier to understand.

So I would suggest that changes at run-time as well as compile-time are important to eliminate in this respect.  And as such – OCP does in fact include run-time closure in “closed for modification”.

Having this settled in our mind – another interesting question arises.  Does “encapsulating” the change of the member variable by making it private and only modifying it through a method call make the class “closed”?  There are two differences between setting a member variable directly, and encapsulating it.  The first is that you don’t actually use an assignment operator.  But this doesn’t do anything to eliminate the fact that you’re changing the variable.  And the second is that you might potentially limit the values that the state may take on, and thus you potentially have a better idea about the nature of the behavior.  While this may be true, the fact that the state can change at an arbitrary time means that the internals of the class can no longer be ignored – since a given method may have more than one potential behavior.  This means that we clearly don’t have a “closed” class.
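
Here’s a small, hypothetical sketch of the difference (the names are invented for illustration) – an “encapsulated” stateful class versus one that is closed in the run-time sense:

    // Not closed: the private var still lets behavior change at an arbitrary time,
    // so the reader can't ignore the internals when reasoning about applyDiscount.
    class DiscountCalculator {
      private var rate: BigDecimal = BigDecimal(0)
      def setRate(newRate: BigDecimal): Unit = rate = newRate
      def applyDiscount(price: BigDecimal): BigDecimal = price * (BigDecimal(1) - rate)
    }

    // Closed at run-time: no state, so the behavior is fixed once written and the
    // result depends only on the parameters – effectively a namespace holding a function.
    object Discounts {
      def applyDiscount(price: BigDecimal, rate: BigDecimal): BigDecimal =
        price * (BigDecimal(1) - rate)
    }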

To take this just one step further – another thing that I’ve discovered is that the more SOLID I make my code, the more my OO code looks very much like FP code.  Because of this, I’ve said for some time that the two paradigms are converging.  This has been based primarily on the experimental approach I’ve talked about here.  But if we look at this situation with OCP – what we’ve basically shown is that a class isn’t SOLID if it maintains state (again, barring strictly-data types from this discussion).  A class with just behavior is very close to being just a namespace with a set of functions in it.

All this being said, I believe even more strongly that the paradigms are converging.  Furthermore, I’m fairly convinced that there are underlying principles that dictate this.  Both paradigms seek to make code “easy to reason about” (to use the FP mantra) though they come at it from different angles.  But in the end, they’re shooting to engage the same mechanism – our human instinct to reason – in the most efficient way possible.  After all, what’s more aesthetically pleasing than that which fully engages the instincts it targets in us.

Partial Mocking and Scala

13 Thursday Dec 2018

Posted by Kyle in Scala, TDD, Technique

≈ Leave a comment

I don’t know how to get this across with the level of excitement it brought to me as I discovered it.  But I’ll try.

I love TDD – the clarity, focus, and downright beauty it brings to software creation is absolutely one of my favorite parts of doing it.  I love the learning that it drives, and I love the foundation it lays for a masterpiece of simplicity.  It’s unparalleled as a programming technique — I only wish I would have caught on sooner.

I love Scala.  I can’t seem to find the link – but there was a good list about how to shoot yourself in the foot in various languages – in Scala, you stare at your foot for two days and then shoot with a single line of code.  The language is amazing in its ability to let you get across your meaning in a radically concise, but type-safe way.  I often find myself expressing a thorough bit of business logic in one or two lines.  Things that would have taken 20-30 lines in a typical C-derivative language.  It’s a fantastic language.

Writing Scala – I’ve gotten into a situation that finally, powerfully, crystallized in an experience this morning.  I spent probably an hour struggling to get at the most understandable, most flexible solution.

The situation is this – I have a class that’s definitely a reasonable size – 30-50 lines or so.  In this case, most of the methods were one-liners.  And they were one-liners that built on each other.  The class had one Responsibility, one “axis of change”.  I liked it as it was.

One problem that arose was that one of the methods was wrapping some “legacy code” (read: untestable – and worse unmockable).  In my Java days – this wouldn’t have even arisen as a problem, because the method using the legacy code would have probably warranted its own class, and thus I could have easily just mocked that class.  As it was, I considered it.  But as I said, the class was very expressive, and said as much as it should have, without saying any more.  To cut a one line method and make it a one line class…would have bordered on ridiculous – it would have been far too fine grained at any rate.

So what’s a code-monkey to do?  Well – I tripped across this idea of a partial mock.  Which I would have derided as pointless in my Java days – and in fact, the prevailing wisdom on the interwebs was that partial mocking is bad.  I don’t want to do bad.  By the way, if you haven’t googled it already – partial mocking is simply taking a class, mocking out some methods, but letting others keep their original behavior (including calling the now-mocked methods on the same class).

Anyway – the more I stared at the problem and balanced the two forces at play, the more I realized how right the solution really is.  In my experience, in Scala, the scenario I just laid out is common, and the only real way to solve for it is with partial mocking.
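
For the curious, this is roughly what it looks like with Mockito’s spy support – the class and names below are invented for illustration, not the actual code from that morning:

    import org.mockito.Mockito.{doReturn, spy}

    // A small, cohesive class where one one-line method wraps untestable legacy code.
    class RateCalculator {
      def legacyLookup(customerId: String): BigDecimal =
        sys.error("calls the legacy system – unusable in a unit test")

      def discountedRate(customerId: String): BigDecimal =
        legacyLookup(customerId) * BigDecimal("0.9")
    }

    object RateCalculatorSpec extends App {
      // Partial mock: stub only the legacy wrapper; the method under test
      // still runs its real implementation and calls the stubbed method.
      val calculator = spy(new RateCalculator)
      doReturn(BigDecimal(100)).when(calculator).legacyLookup("cust-42")

      assert(calculator.discountedRate("cust-42") == BigDecimal(90))
    }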

(Big thanks to Mockito for providing this capability – so awesome!)

Why Patterns Suck

28 Tuesday Aug 2018

Posted by Kyle in Technique

≈ 2 Comments

Tags

design, emergent design, software, Software Craftsmanship

I was at a jazz jam session a few years back. I didn’t know it then – but I was learning a valuable lesson about design patterns.

Music, in its similarity to software, has a class of design patterns called scales (think Sound of Music – “doe re mi fa so..”).  Scales help musicians to have an intuitive understanding of the harmonic relationships between different pitches.  These relationships have mathematical connections that lead to a lot of simplification and a lot of carrying lessons learned in one context to another context.  There is a lot of value in deeply thinking about these relationships, and getting them “under your fingers” – getting an intuitive feeling for them by playing them.

The jazz solo is an interesting thing – it’s a time for a musician to attempt to convey feeling to the listeners, following as few rules as possible.  Though there are a lot of underlying laws to how to create certain feels….most musicians, in order to be able to express feeling in real-time, work hard to have an intuitive grasp of these laws.  Thinking through the specific details of the performance while performing would be impractical, and it would destroy the original inspiration.  Hence, musicians have located patterns (such as scales) – that allow them to work on that grasp when not performing.

After stepping down from my solo (which was undoubtedly life-changing for the audience) … another soloist took the stage.  He played scales.  For the whole solo.

A fellow listener leaned over and whispered in my ear about the ineffectiveness of the approach….in more colorful language.

Scales…..like design patterns in any domain are for developing intuitive understanding of the space.  They are NOT to be included for their own sake, thoughtlessly in the actual creation.

I’ve seen this a couple of times, at grand-scale, in software.  In the early 2000’s – I can’t remember how many singletons I saw cropping up all over the place (yeah, I may have been responsible for a few of those)…many, many times very unnecessarily.

These days there are a number of patterns that get picked up and used wholesale (with little thought) – MVC, Monad, Lambda, Onion, etc.  This is not how great software is written.  Like music – the domain has to be well understood, and then the thing created from that understanding.  When we pick up design patterns, whether they’re scales or singletons, and instead of using them in private to gain understanding we pass them off as the creation itself, we are using them in exactly the most wrong (and harmful) way.

It will make our software worse – decreasing our understanding, and increasing the complexity of our software by creating code that doesn’t match the problem.

 

 

The Case for Scala

30 Friday Jun 2017

Posted by Kyle in General, Technique

≈ Leave a comment

Why take sides if you don’t have to?  It’s uncomfortable; people get upset with you for not seeing things their way; you find yourself spending a lot of time thinking on and defending your position.  Thinking is hard, am I right?!

In technology, things change so much – new tools rise and fall – new techniques rapidly give way to newer techniques.  And large swaths of tech are based on the particular preferences or biases of their creators and/or the community that uses them.  Many times choosing one tool over another is only a matter of taste.  Don’t get me wrong – informed taste is the underpinning of great software.  A set of tools, though, all sharing the same basic properties but offering different ways of going about them – sharing substance, differing in style – make the idea of picking tech less like “picking the right tool for the job” and more like “do you want a red handle on your hammer or a pink one with sparkles?”.

It is INSANE to expend significant energy on the handle of your hammer.  Pick what you like and use it – don’t try to convince anyone to come along and ignore anyone who says red is better than pink.

There are, though, tools that are demonstrably better.

Scala is one of these.

Being a better tool depends on what kind of job you are trying to accomplish.

If you are a software craftsman – focused on creating software that is elegant, just because you can…..if you know that clean, clear code leads to better software in the hands of your users…..if you respect the next developer to look at the codebase enough to leave them a work of art rather than a work of excrement…..then Scala is one of the most premium tools available today.

Why?

There are several reasons – they are fairly subtle – but they add up to granting a deeper ability to craft great software to the person interested in doing such a thing.

#1 – It has a type-system.  Type-systems are like free unit tests.  Unit tests are good.

#2 – It leverages both the object metaphor and the functional/mathematical metaphor – giving you a great degree of expression.  It subtly encourages good stateless, immutable, referentially transparent structuring of your code, but gives you the flexibility to not be that way if you judge the situation to warrant it.

#3 – It (while having great static typing) offers an extremely malleable language, with plenty of syntactic sugar to help you get your point across in the most concise way possible.
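
As a small, made-up illustration of that concision – a complete bit of business logic (“largest paid order per customer”) expressed in a couple of lines of immutable, type-safe code:

    final case class Order(customer: String, total: BigDecimal, paid: Boolean)

    // Group the paid orders by customer and keep each customer's largest total.
    def largestPaidOrders(orders: Seq[Order]): Map[String, BigDecimal] =
      orders.filter(_.paid).groupBy(_.customer).map { case (c, os) => c -> os.map(_.total).max }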

All of these come down to the fact that the language is more expressive than anything else available.  And all of this while running on the JVM – providing a battle-tested runtime environment and a time-tested community of brilliant developers that consistently create the most advanced eco-system around.


Choose Scala.  You will write better software.

 
