Monday, October 25, 2010

Working at the granularity of functions instead of the Unit-Of-Work

The Problem

When I develop tools for internal use, my users are process engineers. Tech-savvy people, but not coders. The problem domain is of a technical nature, which should make it the perfect place to create tooling for automating project production. We spend thousands of hours every year, as does every competitor, on checking numbers and transposing tables and datasets using ordinary office software. We deliver at quality and cost, but we are still inefficient.

We’ve all experienced the users with a “vision”. We’ve had it ourselves. The big framework in the sky. The game-changing application that does everything and knows everything. Attempts at building this application become bloated, costly and maintenance heavy, and often fail to materialize.

Consider the typical setup for such engineers. There is a big project about to be delivered. The project can be broken down into subsystems, and the principal engineer distributes the workload between the available resources:

[image]

All engineers do basically the same work, but on different parts of the system. The complexity of the end product decides the required skill level of the engineer. A complex subsystem will be handled by a senior engineer.

In reality, this is a false premise. The complexity of the end product is irrelevant. What matters is the complexity of the work to produce it.

When re-examining the workflow in order to automate it, it is more interesting to let the workflow be influenced by the emergent properties of the new problem domain, which is the project execution, not the end product.

[image]

When working in a tight loop with this person, we compose sovereign functions that automate various (disconnected) parts of his workload.

[image]

Basically, I have created one empowered user: one user who can field-test the functions. Note that I haven’t created any application yet, only functions. There is an execution engine, but at this point the functions are tailor-made to the individual user.

[image]

If you were to cram all these functions into the same app, you would incur a significant maintenance debt. The functions behave differently and change for different reasons. The volatility of the end app is the sum of all the volatility in its parts. Immature functions will immediately clutter the design, both architecturally and in the GUI.

When field testing the functions, it is common to group them chronologically or by subsystem. However, there is another, often overlooked, aspect to consider: the complexity of applying/executing the function to/against the workload.

[image]

When field testing the functions, new classifications emerge.

  • Some functions (green) can be run straight out-of-the-box. A button would do.
  • Some functions (yellow) demand tweaking in a few places. They can be executed, but with options.
  • Some functions (orange), however, demand unpredictable tweaking, and their parameterization is hard to identify.
  • And finally you have the functions (red) that seemingly demand the same complexity as the computer code itself.
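
To make the categories concrete, here is a minimal C# sketch. All names are invented for illustration; the actual functions live in the execution engine, not in an app.

    using System;
    using System.Linq;

    // Invented example: the same function surfaced at three levels of
    // execution complexity.
    public static class EngineeringFunctions
    {
        // Green: runs straight out of the box; a GUI button is enough.
        public static void RenumberTags()
        {
            RenumberTags(start: 1000, step: 10);
        }

        // Yellow: the same function, exposed with a few well-understood options.
        public static void RenumberTags(int start, int step)
        {
            var numbers = Enumerable.Range(0, 5).Select(i => start + i * step);
            Console.WriteLine(string.Join(", ", numbers));
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            EngineeringFunctions.RenumberTags();         // green: zero decisions
            EngineeringFunctions.RenumberTags(500, 25);  // yellow: tweak a couple of options

            // Orange/red: the parameterization is still being discovered, so the
            // empowered user composes the call himself in a snippet.
            foreach (var start in new[] { 100, 200, 300 })
                EngineeringFunctions.RenumberTags(start, step: 1);
        }
    }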

To cram all such responsibilities into a single application is, IMO, a heavy contributor to application bloat, code rot and the ol’ saying “software is never done, it is abandoned”. The problem is that application developers will fail to properly absorb user requirements in the red and orange categories. In an organization where the developer should not be assumed to know the complexity of the problem domain, this is very costly indeed. Especially if your toolchain has an expensive and slow release process.

The solution

You could examine options for implementing a GUI for the orange/red category. Instead, you let the principal engineers earn their pay.

[image]

Also, as we keep running the functions, we learn more about them and are able to “push” them towards the “green” category. And as we discover new aspects of our workflow, functions can be spawned in the yellow/orange category and, through field testing, end up in the green/yellow category.

So, NOW, we create the GUI.

[image]

Yes, I’m serious. The principal engineer is actually handling computer code (C#) in production. This model has some additional key benefits that may not be obvious.

  • The whole workforce is now a living design process.
  • As they mature, functions gravitate towards the green.
  • App1 is very simple, with functions and a workflow that were well tested before its first version was released. It handles the bulk of the work. Changes are easy, safe and inexpensive.
  • Apps of type “App2” may be more complex, but their complexity is warranted and is directed at a more focused problem domain as well as a dedicated audience. Typically specialized roles/contexts. Stuff you don’t want to clutter up the common case with.
  • Functions that are complex, one-shot, or that people have trouble defining can be put into production in the orange and red categories and left to mature for a while.
  • There is a natural progression and common ground between the developer and the principal engineers.
  • The process is no longer constrained by the GUI and is never blocked. Whenever App1 does not cover something because of some rare case, chances are that the principal engineer can run the underlying function. We do not need to put a GUI harness around every single conceivable usage.

And, it friggin’ works.

Saturday, October 23, 2010

To revolutionize the unknown at App version 1.0

How do you create a seasoned application already at version 1.0? One that completely revolutionizes a field/discipline that you, as a Software Craftsman, know little about?

Can it be done? How is it possible to create something that neither you nor your customer can be assumed to know up front? Nor should either of you be expected to know it. Sure, sometimes people have bright ideas, but that model is potluck and doesn’t scale.

In traditional waterfall, the user is expected to outline his requirements even though his expertise is the current (obsolete) work model that you shouldn’t be constrained by. Also, his experience with the work model is in the execution of the work model, not the work model itself.

In our world of outsourcing, the distance between the supplier of the tools and the place where the work is executed becomes considerable. Even in our little organization of a few thousand employees.

So how do you do it?

The old model is to enter the process by connecting a few dots of his existing workflow from a fresh perspective. You can give the workflow a face lift by reducing the number of clicks and providing a more sensible GUI. You can present something in a rich grid. You can do a parent-child separation to remove a couple of small popup dialogs. But the result is in danger of being a horse carriage with mechanical horses instead of inventing the automobile.

How do you go about revolutionizing a labor-intensive workflow that neither you nor the customer should be expected to have broken down up front?

You do it by pair programming with a non-programmer. You perform the work alongside the person. You watch him work over his shoulder, you ask, you nag, you challenge and you reward. You switch back and forth between annoying the hell out of them and blowing their minds.

In my current work model, LinqPAD is a key component in this. The other person is able to field-test a function just seconds after I have written it. At the end of a cycle, I have created an empowered person equipped with functions that he can bring to the field. Never losing productivity during the process.

And when we have the complete set of functions, validated and field tested… then we are ready to create App version 1.0 based on a wealth of experience from seasoned functions, all in a revolutionized work model… without it sounding contradictory anymore.

Wednesday, August 11, 2010

Empowering the alpha geeks using LinqPAD

Sometimes you find that the users are destined to be constrained by their GUI; that the developers of the tool chain simply cannot satisfy all consumers of said tools. The complex domain of their business model dictates a complexity that will in turn impede release cycle time, and you will never be able to bridge the gap. Ironically, the sum of the user experience is more complex than the underlying reality it is designed to simplify.

“If you can’t beat the enemy, join them”. I like the reverse even more: if they can’t beat you, they should join you.

So, I’ve decided that a particular subset of my users should learn programming. The C# language is expressive and rich. Combined with a rich object model, I may have sugar-coated the pill sufficiently to make them take the leap. Suddenly I am no longer constrained by having to change the GUI, because there is no GUI.

I have found the perfect bed partner for this endeavor. It is a combined development and execution engine for C# snippets that operates against a hidden object model, as well as a snippet repository. I am working full throttle to have an object model ready for the next launch.

Bjarne Stroustrup said "I wouldn't like to build a tool that could only do what I had been able to imagine for it."

My hope is that this new breed of users will be able to push the envelope to reach new heights of productivity and quality (through automated and assisted audit functionality), and that desired functionality will come in the form of enrichment of the object model as opposed to dabbling in the intricacies of how they plan to consume it.
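
To give a flavor of what I am aiming for, here is the kind of snippet such a user might run. This is a hypothetical example: Plant, Signals and the property names are invented stand-ins for the object model, while Dump() is the tool's built-in result viewer.

    // Audit: list signals whose alarm limits are inverted.
    var suspects = Plant.Signals
        .Where(s => s.HighLimit <= s.LowLimit)
        .Select(s => new { s.Tag, s.LowLimit, s.HighLimit });

    suspects.Dump();   // renders the result as a table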

Tuesday, August 10, 2010

Making a maintainable new library of legacy snippets

I recently went through a typical exercise of integrating an existing implementation from a legacy component into a new component with a cleaner API and a higher level of functionality. The legacy component interfaced with an OPC library and used a lot of magic numbers, memory offsets, type conversions, etc. The existing component did not have functions for systematic reconnects, subscribing to group status, bookkeeping for multiple groups, etc. The existing code does everything necessary, but it is not organized to present a clean higher-level API. The code is also pretty fragile: if you make changes and then suddenly find that something has stopped working, you have hours of tedious debugging ahead of you.

A typical scenario where you would normally copy/paste from the older code base and end up with snippets here and there in your code, with dependencies on arbitrary components. And if you were to create unit tests, they would only mirror the production code and basically be redundant.

I decided to do it differently, and I am happy with the result. The process may seem like over-engineering, but it didn’t feel like it while doing it, nor do I feel that the components are difficult to change.

The first challenge was to identify the snippets. I do not plan to put those snippets under unit tests; they are black magic. But I do plan on testing them.

 

So I extract the snippets:

[image]

EDIT! The arrow should be pointing from legacy snippets towards the adapter, obviously

Now I run the Online tester, which operates on the live system. I have functions such as:

  • Add member
  • Create Group
  • Change Group status
  • Remove Member

If I try to add members without taking the group offline first, the app will crash. The point of this exercise is to discover the “rules” of the Adapter. These are rules at the correct level of abstraction, and they have a natural place in unit tests. How the legacy snippets actually go about taking the group offline is not my concern, and can only be meaningfully tested against the live system anyhow. There is automated testing done through this app, but the tests are integration tests and must be set up and kicked off manually.
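
A minimal sketch of how such a rule can be captured at the adapter level, with the legacy snippets hidden behind a seam. All names here are invented for illustration; the point is that the rule is enforced, and unit-testable, above the black magic.

    using System;
    using System.Collections.Generic;

    // Invented seam over the black-magic legacy snippets.
    public interface ILegacySnippets
    {
        void SetGroupOnline(string group, bool online);
        void AddMember(string group, string member);
    }

    public class OpcAdapter
    {
        private readonly ILegacySnippets _legacy;
        private readonly Dictionary<string, bool> _online = new Dictionary<string, bool>();

        public OpcAdapter(ILegacySnippets legacy)
        {
            _legacy = legacy;
        }

        public void CreateGroup(string group)
        {
            _legacy.SetGroupOnline(group, false);
            _online[group] = false;
        }

        public void ChangeGroupStatus(string group, bool online)
        {
            _legacy.SetGroupOnline(group, online);
            _online[group] = online;
        }

        public void AddMember(string group, string member)
        {
            // The rule discovered with the Online tester lives here, where a fake
            // ILegacySnippets makes it unit-testable. How the legacy code actually
            // takes a group offline remains its own, live-tested business.
            if (_online[group])
                throw new InvalidOperationException(
                    "Group '" + group + "' must be offline before adding members.");
            _legacy.AddMember(group, member);
        }
    }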

[image]

EDIT! The arrow should be pointing from legacy snippets towards the adapter, obviously

This is the result. I now have five projects instead of one (not counting any clients), but I feel that I have harnessed the legacy snippets to serve any new component that comes along. Whenever a new version of the live system is released, I need to retest the legacy code, but I do not have to permute through various compositions of those snippets to confirm the validity of the utility functions, as that is covered by the unit tests. I rarely make any changes to the Ugly Legacy Code (ULC, pronounced ulcer, he he).

The helper function library is growing quite substantially, and with clean unit tests. I have successfully harnessed, and extended upon, a complete mess of a code base.

Friday, May 21, 2010

Adopting the blind spots.

A clear distinction between the master and the apprentice is the mental model that they operate from. If a relevant concept is clearly absent from your thought process altogether, it can be described as a disability.

I am currently visiting legacy code written by novice programmers. This code base clearly demonstrates a lack of separation between the microsecond, millisecond and second domains: disk access, SQL Server access and web service calls in the middle of loops. Although there was an elaborate architecture separating the service level from the data level, comprehension of the penalties involved in ping-ponging between them along the control flow timeline appears to have been absent.
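
As a sketch of that blind spot (all names below are invented, not lifted from the code base in question): a remote call in the middle of a tight loop drags the second domain into the microsecond domain, and batching the call restores the separation.

    using System.Collections.Generic;
    using System.Linq;

    public interface IPriceService
    {
        decimal GetPrice(int productId);                           // one round trip per call
        IDictionary<int, decimal> GetPrices(IEnumerable<int> ids); // one batched round trip
    }

    public class Order
    {
        public int ProductId;
        public int Quantity;
        public decimal Total;
    }

    public static class Billing
    {
        // The blind spot: N network round trips hidden inside a tight loop.
        public static void TotalSlow(IPriceService svc, List<Order> orders)
        {
            foreach (var o in orders)
                o.Total = svc.GetPrice(o.ProductId) * o.Quantity;
        }

        // Domains separated: one round trip, then a pure in-memory loop.
        public static void TotalFast(IPriceService svc, List<Order> orders)
        {
            var prices = svc.GetPrices(orders.Select(o => o.ProductId).Distinct());
            foreach (var o in orders)
                o.Total = prices[o.ProductId] * o.Quantity;
        }
    }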

The code made absolutely no sense until I started adopting the thought process behind it. And suddenly I was able to leverage the blind spot, foreseeing likely sore spots.

 

Aristotle said that the hallmark of an educated mind is to be able to entertain a notion without adopting it.

Monday, May 3, 2010

Software Craftsman Yule Calendar: 22nd of December

Today’s word is “Pair programming”

Two persons, one keyboard, preferably with two duplicated screens.

The keyboard is passed back and forth.

The pair does not have to have equal time on the keyboard. As long as multiple people are present and productive software engineering is being performed, it is pair programming.

Passing the keyboard gives you an off-beat rhythm of focus changes, picking up long-term concerns. The brain gets its timeout without you grabbing a coffee, visiting the cubicle next to you, checking the mail or opening the web browser.

Between them:

  • Blind zones and derailing are reduced to the least common denominator.
  • Experience and resources are more than the sum of the two.

“The benefits of pair programming” are obvious to anyone who has given it a serious go. I’d rather be discussing “the risks of solo programming”.

Having to sell every story, you learn to communicate intent, to avoid tangents, to break out of your mentality in order to try the other perspective on for size.

And it’s social. Software is more and more about people, processes, communication and interaction.

Something that I think this post’s co-author, Anne, agrees with.

Software Craftsman Yule Calendar: 21st of December

Today’s topic: “The Jury is in on TDD, there are no more excuses”

  • Test driven development is superior
  • Test driven design is superior

Patient and meticulous grinding at the base level. Driven by the precise execution of core disciplines, reverberating chants of Red-Green-Refactor down the aisles of craftsmen.

TDD forces you to taste your own API. The customer comes first. No work is done until the desired task is purposely expressed in jargon meaningful to the client.

You suddenly start to work with a flexible base material. You keep it, and yourself, flexible by continuously flexing it.

It is a learning experience where the feedback is immediate. Instead of a feedback cycle of hours to weeks, the slap on the wrist comes immediately, with the guilty misconceptions and the complicit mindset still at the scene of the crime.
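
A minimal beat of that cycle, sketched here with NUnit (an assumption; the post names no framework, and Invoice is an invented example):

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceTests
    {
        [Test]
        public void Total_includes_vat()
        {
            // The desired behaviour is stated in client jargon first.
            var invoice = new Invoice(net: 100m, vatRate: 0.25m);
            Assert.AreEqual(125m, invoice.Total);   // red until Total is implemented
        }
    }

    public class Invoice
    {
        private readonly decimal _net;
        private readonly decimal _vatRate;

        public Invoice(decimal net, decimal vatRate)
        {
            _net = net;
            _vatRate = vatRate;
        }

        public decimal Total
        {
            get { return _net * (1 + _vatRate); }   // green; now refactor at will
        }
    }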

  • Of course it’s a new discipline.
  • Of course you will be worse before you get better.
  • Of course it takes a different mind set to be the bringer of change rather than hoping to not be awakening the beast.
  • Of course you won’t find the flow from the get-go.

BUT:

  • When you have the assurance of the green light and due confidence in its significance.
  • When you play for keeps, knowing that people will have to get by your tests to break it.
  • When you don’t have that uneasy feeling of having to go back to recheck something.
  • When you know that you check in better code than you checked out.
  • And you know it works.

…. then it will start to feel unnatural, and unprofessional, to do anything else.

Sunday, May 2, 2010

Software Craftsman Yule Calendar: 18th of December

Today's topic: "The only way to go fast is to go well."

(Before starting this little series on the software craftsmanship yule calendar, I approached Uncle Bob to see if he would pitch in. This was one of his contributions. He has several talks on this, much of which is used in this blog post.)

The Paradox:
  • How many of us have been considerably delayed by bad code?
  • How many of us have written bad code to save time?

Consider a new project.
Everybody is motivated. No legacy code to slow us down. We have the newest toys available. Productivity will be unprecedented, blowing everybody away.
And everything blasts off exactly as we thought. Magic sparkles between our fingertips and keyboard. The rest of the team is mesmerized. The project manager becomes bold. His faith in the programmers is restored. They have his back. He's got theirs.

Then, gradually, something changes. Estimates start creeping from days to weeks. Certain things constantly face small setbacks, both in terms of feasibility and estimates. Developers start checking in code that they hope has seen its last revision for a while. The components start getting personal about who is fit to work on them without breaking stuff. Productivity suffers. Arbitrary, flexible deadlines are postponed. Inflexible deadlines start forcing stuff.

The mistrust starts accumulating. The external behaviour of the developers starts to be reminiscent of laziness and sloppiness. After all, why else would the tasks that were completed in days now take weeks to be half done? If parts of the team have been replaced in the meantime, one might also assume that one is left with a worse group of programmers.

The problem is that the team has created a mess since day one.
The illusory initial productivity was a result of stealing from one of the most unappreciated and obscurely measured liabilities of code ownership: design debt.

Mess accumulates. Mess begets more mess.

How do you deal with mess? You clean it up and you don't make it in the first place.
Don't cheat.
Don't assume you will come back later and clean it up.

When it comes to software, it never pays to rush.

Software Craftsman Yule Calendar: 17th of December

Today's word: "Waterfall"

As stated before, it is important not only to recognize the faults of past paradigms, but also the fortes which gave rise to them in the first place.

Such a beast is waterfall. It was originally conceived because software is difficult to change. That is its premise: code commits design. You need a good upfront design before committing to code. And you need good requirement specs before committing to the design preceding the code.

All this changes when code no longer commits design; when software, properly groomed, is painless to change. One of the primary motivations for waterfall does a 180-degree turn overnight.

Still, waterfall may be suitable and sometimes even necessary.

Suitable:
* Irreducible designs. Some designs are based on a design vision that would be hard to achieve incrementally.
* When specs will not change. When there is no reason to assume that there will be any learning curve along the way from start to finish, neither for the developer nor the stakeholder. If I can get all the specs up front, I will gladly receive them. I just don't think this will be true more than once or twice in a lifetime.

Necessary:
* Even though well-crafted code now responds well to change, there are still elements of solutions which do not, such as business contracts, hardware and third-party stuff.

Software Craftsman Yule Calendar: 16th of December

Today's topic is "The gaffa tape programmer"

I strive to hone my coding craft at every opportunity.
I don't see that as being at odds with any other priorities in the coding domain.

However, I do mash stuff together sometimes, which is otherwise known as "Gaffa tape programming".

I don't always strive for complete 100% coverage. I strive to identify pockets/domains within my code that warrant TDD. The more I can fit into those pockets, the better. The more I can compartmentalize the material, the better.

But when the building blocks are in place, so that I can from the outset purposely express an end task in the suitable pseudo language that rises from the existing components, I "mash" them together.

Yesterday, I noticed that the old comic book series "elf quest" had been published on the net (www.elfquest.com), as it was no longer going to be printed. You could browse the comic book online by clicking "next", "next", etc.

I made a web crawler to scrape it onto my hard drive. Three loops: one "do-while" and two "for-each". No tests.
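
For flavor, here is a sketch of what such a throwaway crawler might look like. This is a reconstruction under assumptions, not the original code: the start URL and the regexes are invented, and error handling is omitted in true gaffa tape spirit.

    using System.Collections.Generic;
    using System.IO;
    using System.Net;
    using System.Text.RegularExpressions;

    class Scraper
    {
        static void Main()
        {
            var client = new WebClient();
            var pages = new List<string>();
            string url = "http://www.elfquest.com/read/page1"; // hypothetical start page

            do   // loop 1: follow the "next" links until there are none
            {
                string html = client.DownloadString(url);
                pages.Add(html);
                var next = Regex.Match(html,
                    "href=\"(?<u>[^\"]+)\"[^>]*>\\s*next", RegexOptions.IgnoreCase);
                url = next.Success ? next.Groups["u"].Value : null;
            } while (url != null);

            foreach (string html in pages)             // loop 2: each harvested page
            {
                foreach (Match img in Regex.Matches(   // loop 3: each image on the page
                    html, "<img[^>]+src=\"(?<src>[^\"]+\\.(?:gif|jpg|png))\""))
                {
                    string src = img.Groups["src"].Value;
                    client.DownloadFile(src, Path.GetFileName(src));
                }
            }
        }
    }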

Friday, April 16, 2010

Software Craftsman Yule Calendar: 15th of December

Today's word is "Lean"

We're paid to produce and maintain code, right? So, by producing code we are productive. And by maintaining the same code, we are even more productive.

Well, not really. Without manifesting any business value, this is at best a zero-sum game.

Code has cost of ownership.

Code made on assumptions is a business risk.

Code of poor quality is a liability.

Code of poor quality that must be maintained is a design debt.

Ill-conceived concepts confuse, distract and disrupt within an organization.

As coders, we need our managers to understand the nature of the beast that is code, so that they can make informed decisions.

Developers often complain about management, but have you ever considered that they may just be on the wrong page of their finance textbook? That such-and-such wasn't really an asset acquisition scenario, but rather a risk management scenario?

Lean is about identifying business value to facilitate focused effort. Lean is about amplifying learning, deferring decisions, delivering value early and responding with minimal overhead.

 

  • Define and identify values.
  • Define, identify and innovate value streams.
  • Promote value flow.
  • All activity should have an active stakeholder downstream. Pull based.
  • Refine the process.

Friday, February 26, 2010

Software Craftsman Yule Calendar: 11th of December

Today's word is "Agile"

Let's start by saying that if you have a strict and rigid definition of how to be agile, then that is a red flag right there. Some cornerstones exist, though.

Iterative
Short cycles. Complete cycles. Reflecting the entirety of your business. From concept to deliverable. From "not done" to "done" or "not done, and here's why...". Not "half done" or "done, but...".

Incremental
The business assets shall have a net increase. Liabilities shall not exceed the investment of the ongoing cycle.

Cross functional teams
The team shall have the competence, ability and means to produce. The need for predicted external involvement shall not be of a nature or quantity beyond what can be assigned to the SCRUM master. This is mostly about removing impediments, not engaging externals with detailed assignments.

Self managing teams
Specialized assets within the team should naturally gravitate towards the appropriate tasks that require them, but not at the expense of the team members assuming collective ownership of the entirety of the end product.