Wednesday, May 4, 2011

Technical debt

In our profession it is often hard to find a layman's analogy that properly describes the tendencies and properties that characterize our craft.

A codebase can be aptly characterized as “rigid” or “brittle”. We may laugh at how well such a word describes a specific component. But aside from a good laugh, this is something we should cultivate, as it helps us communicate with professions other than programmers.

Sometimes an analogy is so powerful that it becomes an industry standard.

One such term was coined by Ward Cunningham, who worked with clients in a financial institution and was bogged down by a codebase riddled with issues. The end functionality was satisfactory, and the clients were initially unwilling to invest 2-3 months of work with no added business value just to “improve the quality of the code”.

The qualities of the codebase, aside from functionality, were invisible to the clients. They did notice, however, that:

· Estimates more and more frequently ran into weeks.

· Small changes broke unrelated stuff.

· Programmers were reluctant to visit certain parts of the system.

Cunningham introduced the term “technical debt”. With this simple context switch, the financial aspects of the problem suddenly became apparent to the clients.

It simply means that you have borrowed something, most commonly time, from somewhere. As with any debt, you keep paying installments until the principal is repaid. If you decide not to pay, interest accumulates.

It doesn’t have to be time; it may be understanding of the component or technology. You have found a nice little place where you can insert a discreet “if-then” sidetrack to patch up or filter away some data that passes through some random snippet. You are confident that you haven’t broken anything, and you count yourself lucky that you didn’t have to understand the complete context of the snippet.
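A minimal sketch of what such a sidetrack can look like (the pipeline, the types and the sensor rule are all hypothetical):

```csharp
using System.Collections.Generic;

public record Reading(int SensorId, double Value);

public static class Cleanup
{
    // Somewhere deep in a data pipeline, a convenient spot is found.
    public static IEnumerable<Reading> Normalize(IEnumerable<Reading> readings)
    {
        foreach (var r in readings)
        {
            // The discreet "if-then" sidetrack: nobody has to understand
            // why sensor 42 emits negative values; we just filter them
            // out here. The borrowed understanding accrues interest every
            // time someone else has to read this code.
            if (r.SensorId == 42 && r.Value < 0)
                continue;

            yield return r;
        }
    }
}
```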

The problem is that the code rots. If you never pay the borrowed time back, you end up with most functions not doing what they claim to do. The code is not coherent at any single level. You need to constantly deep-dive to figure out how it “really” works. Soon, “if-then” is all you dare do.

Technical debt accumulates exponentially, especially since management frowns upon a decrease in productivity. So the developer “steals” more time by adding to the pool of technical debt.

Technical debt, like ordinary debt, is not inherently bad, but it should be managed responsibly. It is a tool. A business that never took on any kind of debt would lose all agility; it could not respond to its market, but would be at the complete mercy of its technical underpinnings.

Monday, October 25, 2010

Working at the granularity of functions instead of Unit-Of-Work

The Problem

When I develop tools for internal use, my users are process engineers: tech-savvy people, but not coders. The problem domain is of a technical nature, which should make it the perfect place to create tooling for automating project production. We spend thousands of hours every year, as does every competitor, checking numbers and transposing tables and datasets using ordinary office software. We deliver at quality and cost, but we are still inefficient.

We’ve all experienced users with a “vision”. We’ve had it ourselves: the big framework in the sky, the game-changing application that does everything and knows everything. Attempts at making this application become bloated, costly and maintenance-heavy, and often fail to materialize.

Consider the typical setup of such engineers. There is a big project about to be delivered. The project can be broken down into subsystems, and the principal engineer distributes the workload between the available resources:

[image: the project broken down into subsystems and distributed among the engineers]

All engineers do basically the same work, but on different parts of the system. The complexity of the end product decides the required skill level of the engineer. A complex subsystem will be handled by a senior engineer.

In reality, this is a false premise. The complexity of the end product is irrelevant. What matters is the complexity of the work to produce it.

When re-examining the workflow in order to automate it, it is more interesting to let the workflow be shaped by the emergent properties of the new problem domain, which is the project execution, not the end product.

[image: the workflow re-examined around project execution rather than the end product]

Working in a tight loop with this person, we compose sovereign functions that cover various (disconnected) parts of his workload.

[image: sovereign functions covering various disconnected parts of the workload]

Basically, I have created one empowered user; one user who can field-test the functions. Note that I haven’t created any application yet, only functions. There is an execution engine, but at this point the functions are tailor-made to the individual user.

[image: one empowered user field-testing tailor-made functions through an execution engine]
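To make “only functions, no application” concrete, here is a minimal sketch of what one such tailor-made function could look like. The domain and names are hypothetical, but it mirrors the “transposing tables” work described above:

```csharp
using System.Collections.Generic;
using System.Linq;

// A sovereign function: no GUI, no application, just a verb the
// empowered user can run from the execution engine.
public static class DatasetFunctions
{
    // Transpose a table given as rows of cells; ragged rows are padded.
    public static List<List<string>> Transpose(List<List<string>> rows)
    {
        if (rows.Count == 0) return new List<List<string>>();
        int width = rows.Max(r => r.Count);
        return Enumerable.Range(0, width)
            .Select(col => rows.Select(r => col < r.Count ? r[col] : "").ToList())
            .ToList();
    }
}
```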

If you were to cram all these functions into the same app, you would incur a significant maintenance debt. The functions behave differently and change for different reasons. The volatility of the end app is the sum of all the volatility in its parts. Immature functions will immediately clutter the design, both architecturally and in the GUI.

When field-testing the functions, it is common to group them chronologically or by subsystem. There is, however, another, often overlooked aspect to consider: the complexity of applying/executing each function against the workload.

[image: the functions classified by the complexity of executing them against the workload]

When field testing the functions, new classifications emerge.

  • Some functions (green) can be run straight out-of-the-box. A button would do.
  • Some functions (yellow) demand tweaking in a few places. They can be executed, but with options.
  • Some functions (orange), however, demand unpredictable tweaking, and their parameterization is hard to identify.
  • And finally you have the functions (red) that seemingly demand the full complexity of the computer code itself (see the sketch after this list).
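A small sketch of how the categories tend to show up in the function signatures (all names and rules are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class EngineeringFunctions
{
    // Green: needs nothing beyond the workload itself. A button would do.
    public static int CountMissingTags(IReadOnlyList<string> tags) =>
        tags.Count(string.IsNullOrWhiteSpace);

    // Yellow: runnable, but with a few well-understood options.
    public static IEnumerable<string> FindLongTags(
        IReadOnlyList<string> tags, int maxLength = 32, bool trimFirst = true) =>
        tags.Where(t => (trimFirst ? t.Trim() : t).Length > maxLength);

    // Orange/red: the parameterization is open-ended; the caller in
    // effect supplies code. Only a scripting surface (or a principal
    // engineer with the execution engine) can drive this sensibly.
    public static IEnumerable<string> TransformTags(
        IReadOnlyList<string> tags, Func<string, string> rule) =>
        tags.Select(rule);
}
```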

Cramming all such responsibilities into a single application is, IMO, a heavy contributor to application bloat, code rot and the ol’ saying “software is never done, it is abandoned”. The problem is that application developers fail to properly absorb user requirements in the red and orange categories. In an organization where the developer cannot be assumed to know the complexity of the problem domain, this is very costly indeed, especially if your toolchain has an expensive and slow release process.

The solution

You could examine options for implementing a GUI for the orange/red categories. Instead, you let the principal engineers earn their pay:

[image: principal engineers executing the orange/red functions directly]

Also, as we keep running the functions, we learn more about them and are able to “push” them towards the “green” category. And as we discover new aspects of our workflow, new functions can be spawned in the yellow/orange category and, through field testing, end up in the green/yellow category.

So, NOW, we create the GUI.

[image: the GUI applications layered on top of the matured functions]

Yes, I’m serious. The principal engineer is actually handling computer code (C#) in production. This model has some additional key benefits that may not be obvious.

  • The whole workforce is now a living design process.
  • As they mature, functions gravitate towards the green.
  • App1 is very simple, with functions and a workflow that were well tested before its first version was released. It handles the bulk of the work. Changes are easy, safe and inexpensive.
  • Apps of type “App2” may be more complex, but their complexity is warranted and aimed at a more focused problem domain as well as a dedicated audience, typically specialized roles/contexts. Stuff you don’t want to clutter up the common case with.
  • Functions that are complex, one-shot, or hard for people to define can be put into production in the orange and red categories and left to mature for a while.
  • There is a natural progression and common ground between the developer and the principal engineers.
  • The process is no longer constrained by the GUI and is never blocked. Whenever App1 does not cover some rare case, chances are that the principal engineer can run the underlying function. We do not need to put a GUI harness around every single conceivable usage.

And, it friggin’ works.

Saturday, October 23, 2010

To revolutionize the unknown at App version 1.0

How do you create a seasoned application already at version 1.0? One that completely revolutionizes a field/discipline that you, as a Software Craftsman, know little about?

Can it be done? How is it possible to create something that neither you nor your customer can be assumed to know up front? Nor should either of you be expected to know it. Sure, sometimes people have bright ideas, but that model is potluck and doesn’t scale.

In traditional waterfall, the user is expected to outline his requirements even though his expertise is the current (obsolete) work model, which you shouldn’t be constrained by. Also, his experience with the work model is in the execution of the work model, not the work model itself.

In our world of outsourcing, the distance between the supplier of the tools and the place where the work is executed becomes considerable, even in our little organization of a few thousand employees.

So how do you do it?

The old model is to enter the process by connecting a few dots of the user's existing workflow from a fresh perspective. You can give the workflow a facelift by reducing the number of clicks and providing a more sensible GUI. You can present something in a rich grid. You can do a parent-child separation to remove a couple of small popup dialogs. But the result is in danger of being a horse carriage with mechanical horses, instead of the invention of the automobile.

How do you go about revolutionizing a labor-intensive workflow that neither you nor the customer can be expected to have broken down up front?

You do it by pair programming with a non-programmer. You perform the work alongside the person. You watch him work over his shoulder, you ask, you nag, you challenge and you reward. You switch back and forth between annoying the hell out of them and blowing their minds.

In my current work model, LinqPAD is a key component of this. The other person is able to field-test a function just seconds after I have written it. At the end of a cycle, I have created an empowered person, equipped with functions that he can bring to the field, never losing productivity during the process.
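For a feel of the cycle, here is the kind of snippet that gets field-tested seconds after it is written. Workspace and Cables are hypothetical object-model entry points; Dump() is LinqPAD's built-in result viewer:

```csharp
// A LinqPAD "C# statements" snippet: written by the developer,
// run by the engineer against real project data moments later.
var suspect = Workspace.Cables
    .Where(c => c.Length <= 0 || string.IsNullOrEmpty(c.Tag))
    .OrderBy(c => c.Tag);

suspect.Dump("Cables that need a second look");
```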

And when we have the full set of functions, validated and field-tested… then we are ready to create App version 1.0, based on a wealth of experience from seasoned functions in a revolutionized work model… without it sounding contradictory anymore.

Wednesday, August 11, 2010

Empowering the alpha geeks using LinqPAD

Sometimes you find that the users are destined to be constrained by their GUI; that the developers of the toolchain simply cannot satisfy all consumers of said tools. The complexity of the business domain dictates a complexity that in turn impedes release cycle time, and you will never be able to bridge the gap. Ironically, the sum of the user experience becomes more complex than the underlying reality it is designed to simplify.

“If you can’t beat the enemy, join them”. I like the reverse even more: if they can’t beat you, they should join you.

So, I’ve decided that a particular subset of my users should learn programming. The C# language is expressive and rich. Combined with a rich object model, I may have sugar-coated the pill sufficiently to make them take the leap. Suddenly I am no longer constrained by having to change the GUI, because there is no GUI.

I have found the perfect bed partner for this endeavor: a combined development and execution engine for C# snippets that operates against a hidden object model, plus a snippet repository. I am working at full throttle to have an object model ready for the next launch.
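A minimal sketch of the idea (the domain types are hypothetical): desired functionality arrives as enrichment of the object model, so the users' snippets stay one-liners.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical object model exposed to the scripting users.
public class Signal
{
    public string Tag { get; set; }
    public double Value { get; set; }
    public double Min { get; set; }
    public double Max { get; set; }

    // Enrichment lives on the model: every user snippet can reuse it.
    public bool IsOutOfRange => Value < Min || Value > Max;
}

public class Plant
{
    public List<Signal> Signals { get; } = new List<Signal>();

    // An assisted-audit helper, one line away for any snippet author.
    public IEnumerable<Signal> Audit() => Signals.Where(s => s.IsOutOfRange);
}
```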

Bjarne Stroustrup said "I wouldn't like to build a tool that could only do what I had been able to imagine for it."

My hope is that this new breed of users will push the envelope to reach new heights of productivity and quality (through automated and assisted audit functionality), and that desired functionality will come in the form of enrichment of the object model, as opposed to dabbling in the intricacies of how they plan to consume it.

Tuesday, August 10, 2010

Making a maintainable new library of legacy snippets

I recently went through a typical exercise of integrating an existing implementation from a legacy component into a new component with a cleaner API and a higher level of functionality. The legacy component interfaced an OPC library and used a lot of magic numbers, memory offsets, type conversions, etc. It did not have functions for systematic reconnects, subscribing to group status, bookkeeping for multiple groups, etc. The existing code does everything necessary, but it is not organized to present a clean, higher-level API. The code is also pretty fragile: if you make a change and suddenly find that something has stopped working, you have hours of tedious debugging ahead of you.

This is a typical scenario where you would normally copy/paste from the older code base and end up with snippets here and there in your code, each with dependencies on arbitrary components. And if you were to create unit tests, they would only mirror the production code and be basically redundant.

I decided to do it differently, and I am happy with the result. The process may seem like over-engineering, but it didn’t feel like that while doing it, nor do I feel that the components are difficult to change.

The first challenge was to identify the snippets. I do not plan to put those snippets under unit tests; they are black magic. But I do plan on testing them.

 

So I extract the snippets:

[image: the legacy snippets extracted behind an adapter]

EDIT! The arrow should be pointing from legacy snippets towards the adapter, obviously

Now I run the Online tester, which operates on the live system. It has functions such as:

  • Add member
  • Create Group
  • Change Group status
  • Remove Member

If I try to add members without taking the group offline first, the app will crash. The point of this exercise is to discover the “rules” of the Adapter. These are rules at the correct level of abstraction, and they have a natural place in unit tests. How the legacy snippets actually go about taking the group offline is not my concern, and it can only be meaningfully tested against the live system anyhow. There is automated testing done through this app, but those are integration tests and must be set up and kicked off manually.
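A minimal sketch of such a rule in the adapter, with a unit test at the same level of abstraction. All names here are hypothetical, and the actual OPC calls stay hidden behind the legacy snippets (NUnit is used for the test):

```csharp
using System;
using NUnit.Framework;

// What the adapter hides: the black-magic legacy snippets.
public interface ILegacySnippets
{
    void SetGroupState(bool online);
    void AddItem(string itemId);
}

public class GroupAdapter
{
    private readonly ILegacySnippets _legacy;
    public bool IsOnline { get; private set; }

    public GroupAdapter(ILegacySnippets legacy) => _legacy = legacy;

    public void SetOnline(bool online)
    {
        _legacy.SetGroupState(online); // how this works is not my concern
        IsOnline = online;
    }

    // The rule discovered with the Online tester: adding members to an
    // online group crashes the live system, so the adapter forbids it.
    public void AddMember(string itemId)
    {
        if (IsOnline)
            throw new InvalidOperationException(
                "Take the group offline before adding members.");
        _legacy.AddItem(itemId);
    }
}

// The rule has a natural place in a unit test; no live system needed.
[TestFixture]
public class GroupAdapterTests
{
    private class FakeSnippets : ILegacySnippets
    {
        public void SetGroupState(bool online) { }
        public void AddItem(string itemId) { }
    }

    [Test]
    public void AddMember_WhileOnline_Throws()
    {
        var adapter = new GroupAdapter(new FakeSnippets());
        adapter.SetOnline(true);
        Assert.Throws<InvalidOperationException>(
            () => adapter.AddMember("FT-1001"));
    }
}
```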

[image: the resulting project structure, with the legacy snippets harnessed behind the adapter]

EDIT! The arrow should be pointing from legacy snippets towards the adapter, obviously

This is the result. I now have five projects instead of one (not counting any clients), but I feel that I have harnessed the legacy snippets to serve any new component that comes along. Whenever a new version of the live system is released, I need to retest the legacy code, but I do not have to permute through the various compositions of those snippets to confirm the validity of the utility functions, as that is covered by the unit tests. I rarely make any changes to the Ugly Legacy Code (ULC, pronounced “ulcer”, he he).

The helper function library is growing quite substantially, and with clean unit tests. I have successfully harnessed, and extended upon, a complete mess of code.

Friday, May 21, 2010

Adopting the blind spots

A clear distinction between the master and the apprentice is the mental model that they operate from. If a relevant concept is clearly absent from your thought process altogether, it can be described as a disability.

I am currently visiting legacy code written by novice programmers. This code base clearly demonstrates a lack of separation between the microsecond, millisecond and second domains: disk access, SQL Server access and web service calls in the middle of loops. Although there was an elaborate architecture to separate the service level from the data level, comprehension of the penalties involved in ping-ponging between them in the control-flow timeline appears to have been absent.
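A minimal sketch of the blind spot, with hypothetical names: a millisecond-domain call buried in a microsecond-domain loop, and the batched alternative that keeps the domains apart.

```csharp
using System.Collections.Generic;
using System.Linq;

public record Order(int Id, decimal Total);

// Hypothetical repository; every call is a SQL Server round-trip,
// i.e. the millisecond domain.
public interface IOrderRepository
{
    Order Load(int id);                                   // one round-trip per call
    IReadOnlyList<Order> LoadMany(IEnumerable<int> ids);  // one round-trip total
}

public static class Reporting
{
    // The blind spot: N database round-trips hidden in a tight loop.
    public static decimal SumTotalsNaive(IOrderRepository repo, IEnumerable<int> ids)
    {
        decimal sum = 0;
        foreach (var id in ids)
            sum += repo.Load(id).Total;   // millisecond-domain call per iteration
        return sum;
    }

    // Separating the domains: one millisecond-domain call, then pure
    // in-memory (microsecond-domain) work.
    public static decimal SumTotalsBatched(IOrderRepository repo, IEnumerable<int> ids) =>
        repo.LoadMany(ids).Sum(o => o.Total);
}
```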

The code made absolutely no sense until I started adopting the thought process behind it. And suddenly I was able to leverage the blind spot, foreseeing likely sore spots.

 

It is said, in a line usually attributed to Aristotle, that the hallmark of an educated mind is the ability to entertain a notion without adopting it.

Monday, May 3, 2010

Software Craftsman Yule Calendar: 22nd of December

Today’s word is “Pair programming”.

Two persons, one keyboard, preferably with two duplicated screens.

The keyboard passed back and forth.

The pair does not have to have equal time on the keyboard. As long as multiple people are present and productive software engineering is being performed, it is pair programming.

Passing the keyboard gives you an off-beat rhythm of focus changes, picking up long-term concerns. The brain gets its timeout without you grabbing a coffee, visiting the cubicle next door, checking the mail or opening the web browser.

Between them:

  • Blind zones and derailing are reduced to the lowest common denominator.
  • Experience and resources amount to more than the sum of the two.

The benefits of pair programming are obvious to anyone who has given it a serious go. I’d rather discuss “the risks of solo programming”.

Having to sell every story, you learn to communicate intent, to avoid tangents, to break out of your mentality in order to try the other perspective on for size.

And it’s social. Software is more and more about people, processes, communication and interaction.

Something that I think this post’s co-author, Anne, agrees with.