Friday, December 11, 2009

Software Craftsman Yule Calendar: 10th of December

"Perfection is achieved, not when there is nothing left to add, but when there is nothing left to remove"

Thursday, December 10, 2009

Software Craftsman Yule Calendar: 9th of December

"Enhance, do not replace"
part 4


Previously
"Following a plan".
"Not only following a plan, but responding to change"
"Not only responding to change, but steadily adding value".

"Contract negotiations"
"Not only contract negotiations, but customer collaboration"
"Not only customer collaboration, but productive partnership"

"Processes and tools"
"Not only processes and tools, but individuals and interactions"
"Not only individuals and interactions, but a community of professionals"


Today
Today is a bit code-centric. In our pursuit of the higher value, we find the lower one to be indispensable.

"Comprehensive Documentation"
"Working software over comprehensive documentation"
"Well crafted software over working software"


Would you have bought something expensive if nothing told you what it was or what it did?
Would you have bought a car that the salesman had trouble starting? And would it then be of any help that you have a thorough understanding of what the car *should* have done?
Would you have bought the car if you opened the hood and found a complete mess (even in the eye of a mechanic)? The fact that the car actually *starts* will again seem insufficient.

Our profession is first and foremost to produce actively used software that creates value for our business by creating value for its customers. In and of itself, code is not worth anything.
But an existing codebase is also an asset for meeting future solutions and challenges. For that, we need maintainable and clean code. Code is read more often than it is written.

Tuesday, December 8, 2009

Software Craftsman Yule Calendar: 8th of December

"Enhance, do not replace"
part 3

Building on the existing theme, another gem ensues from the Agile and Craftsmanship manifestos.
While still recognizing the former, we value the latter more.

Thus far:

"Following a plan".
"Not only following a plan, but responding to change"
"Not only responding to change, but steadily adding value".

"Contract negotiations"
"Not only contract negotiations, but customer collaboration"
"Not only customer collaboration, but productive partnerships"


Today:


"Processes and tools"
Processes are important. They provide map and compass in undefined terrain. You can contribute a piece of the puzzle without elaborate concern for every other part. You can interface with others in a formal process and know both what is expected of you and what you can expect from others.

"Not only processes and tools, but individuals and interactions"
Processes are means to an end.
Initially, a process surrounding a discipline is based on the experience of yesterday's efforts and challenges. When you start taking ownership of the problem domain, you will be able to relate to the customer base more directly and to elicit knowledge and leverage opportunities that might not be available through the process. You will become approachable from the customer's point of view.


"Not only individuals and interactions, but a community of professionals"
Eventually, you'll see that process and interaction are not opposites. You will need to define the process alongside the product, as an integral part of it. It needs to evolve with the product. The optimal process supports a community where the actual knowledge, insight and competence within the community are naturally and appropriately applied. A community that takes shared ownership of the totality of the end product.

Monday, December 7, 2009

Software Craftsman Yule Calendar: 7th of December

"The decision making triangle"
This one is from Extreme Programming (XP).

How is a technical decision made? What forces are we trying to unify in all our procedures, phase reports, business analyses and product reviews? How is a feature request spawned, and what, when it is presented, argues for or against it?

The decision making triangle has three components.

The technician, who provides estimates, cost, risk and quality impact.
The customer, who translates a fluid and chaotic business domain into a concrete set of durable features.
The economist, who manages the entire backlog, relating cost-effective activities to return on investment, company strategy and resources.

These do not always have to be three separate individuals, and could perhaps more accurately be referred to as roles/hats/influences.
If there is a crucial support issue, there is obviously no need for an explicit "customer" and "economist" role in order to respond to the matter.
Also, pair programming has a tendency to encourage explicitness and a more elaborate discourse.
Still, these concerns should often be explicitly addressed. Not only for the quarterly review, but as an integral part of the iterative and incremental process.

As I imagine most of my imaginary readers to be technicians, we may insist that all those estimates are unreasonable. Often this may be the case, but bringing the best estimate, or even the knowledge of uncertainty, into the mix is the responsibility of the technician.

This triangle is the core of the decision process, not the entirety of the team. All of the three roles may have a team of individuals behind them to carry out the respective responsibilities.

The thing I want to focus on in this post is the value of a good customer. In this context, it is not the customer with the most money or willingness to spend. I am not necessarily talking about a person outside your own company.

Above all, the customer is not a developer who has received a four-inch-thick requirements document in his lap.

The customer is whoever possesses the underestimated, yet invaluable, ability to translate a business problem into addressable concerns. The customer is the one who identifies the underlying durable aspects that will form a sustainable business asset.

The good customer is he whose backlog is based on, and traceable back to, true business concerns.
An organization with good customers has good knowledge of its value flow. It can allow its model to be pull-based. There are fewer assumptions and less liability in its efforts.

The ideal customer is he who will need the least amount of code to make the most amount of money. The ideal customer is he who lives and breathes the problem with a passion, but has a minimum of preconceptions on how it is to be solved.

Friday, December 4, 2009

Software Craftsman Yule Calendar: 4th of December

"Enhance, do not replace"
part 2.

Another concrete theme through which one gradually develops to challenge established mentality, procedures and practices, and also one's own position.

While still recognizing the former, we value the latter more.

Excerpt from yesterday's calendar:
"Following a plan".
"Not only following a plan, but responding to change"
"Not only responding to change, but steadily adding value".


And for today
"Contract negotiations"
"Not only contract negotiations, but customer collaboration"
"Not only customer collaboration, but productive partnership"

More on the value of a good customer tomorrow, in "The decision making triangle".

Thursday, December 3, 2009

Software Craftsman Yule Calendar: 3rd of December

"Enhance, do not replace"

In these agile times, there are a lot of concepts that are beneficial to absorb.
But it is not necessary to do so at the expense of past knowledge.
Concepts may appear contradictory at face value, but often this only means that they can be successfully combined only by a seasoned decision maker.

It is important not only to know why something in the past was bad, but also why it was good.
Not only knowing a rule, but knowing when to challenge the rule.
Identifying our craft is an ongoing process, with driving forces in the context of existing paradigms. Don't just see the bottom line without its context.
See the train of thought, learn from history and aim to enhance the collective knowledge that is the craft you are a part of.
Don't just swap one doctrine for the next.

Identify with each of the following, and allow the natural progression to the next.

"Following a plan"
What can we do if we can't follow a plan? How can others then rely on us?

"Not only following a plan, but also responding to change"
We relate to a dynamic world. Specifications are sometimes truth and sometimes assumptions.

"Not only responding to change, but also steadily adding value".
Why assume that your present knowledge won't change in the future, or that it hasn't overlooked or de-emphasized something in the past?

Wednesday, December 2, 2009

Software Craftsman Yule Calendar: 2nd of December

"Do no harm"
(The Hippocratic oath revisited)

The Boy Scout rule is to leave the camp site better than you found it. This goes for coding as well as camping.

Simply check in better code than you checked out. Not much, necessarily, just a little.
Don't create a mess.
Never buy into the self delusion that you can come back and clean it up at a later time.
Don't let the code rot.

All the other things in our possession:
  • The Architecture document.
  • The Design Document.
  • The Requirement specification.
  • The Test description.
  • The user guide.
All these things are mere derivatives of our true core; the code.

Where the code is a mess, everything is a mess.
Where the code is rotten, everything is rotten.

Software Craftsman Yule Calendar: 1st of December

"Do not be blocked".

This is so fundamental that I considered a "thou shalt not...."

Don't wait for specifications.
Don't wait for clarifications.
Go with what you have. Influence the decision with what you know and what you can do.
Make functional samples, sketches, prototypes. Duct tape is allowed.
Seize ownership. Become integral.
Learn the problem by spending time in the problem domain.
It is allowed to throw away software. Even newly connected neurons are a benefit.
Something tangible and concrete produces tangible and concrete feedback.

Introducing: The Software Craftsman Yule Calendar

In this series, one post for each workday up till Christmas, I'll explore an isolated piece of what I consider to be important traits of the professional software developer.

There may be repetitions from earlier posts, but this time the succinct form will hopefully enhance the take-away value.

Tuesday, October 20, 2009

The duct tape programmer and the patterns geek.

Is there a middle road?

I just finished the book "Design Patterns In Action" and I am starting "Refactoring to Patterns".

Then I find the following post by Joel Spolsky:
http://www.joelonsoftware.com/items/2009/09/23.html

A danger of our undefined industry is addicting yourself to bits and pieces along the way. You get pet peeves that are secondary to the actual business that your profession is a part of.

I am completely sold on TDD, but there certainly is a ladder for its business value with distinct stages.

If you don't have coverage or reliability on your tests, you can't refactor with a vengeance.
If you don't do tests first, you have significantly less chance of producing decoupled code.
If you can't refactor, you will still produce a mess.
If you can't refactor into patterns, you won't see the tests driving the design.

But if, at the end of all this, you leverage TDD to its fullest, have you become over-academic along the way?

Does TDD, in all its glory, scale downwards?
Both with respect to project size as well as timeframe.

Are there scenarios where your excellent TDD skills scale so badly that they become an obstacle to getting things done and shipping?

Robert Martin says "Do not go fast, go well".

My perspective is that IDEs that promote a test-first discipline, together with powerful add-ons like Resharper or CodeRush, make this discipline go faster and sturdier. Also, future architectures seem to separate logic and presentation in a more clear-cut manner.

As I've stated in an earlier blog post, the debate is over and TDD won, but I appreciate the duct tape programmer perspective. Although I suspect he is a dying breed within the mainstream, where people have to maintain code bases and incurred technical debt carries interest.

Thursday, October 8, 2009

When do you accept a requirement spec?

A prime directive whenever you decide to leap to a new technology/methodology/discipline is to make sure that you have added value to your repertoire. You should GROW, not CHANGE. You should trash what does not work, but you should not trash apples for oranges.

Like Robert Martin says: "You should not only absorb why waterfall is bad, you should also know why it was good (at times)".

I have started to appreciate certain indicators of when a requirement spec is justified.
Notice the phrasing above; I put the onus of waterfall applicability on the person wishing to present me with a requirement spec. The burden of proof is on the person professing waterfall; the default is Agile.


In his book "Clean Code", Uncle Bob quotes several Nestors of our field on what clean code means to them. Ward Cunningham offers one of his characteristic, apparently annoyingly redundant comments: "When the code is pretty much what you expect".
At first glance this seems like a cop-out from answering the real question, but upon closer reflection there are subtle brilliances within this very simple answer. Reading clean code should not make you pause; there should be few "flavours"; nothing should feel forced; there should be no need to mentally context-switch within a single class; the names of the class and its members should give you a reasonably good idea of what lies behind them.


For all my thoughts on the little nuggets of factors that combine into a good requirement specification, it basically boils down to the same thing:
When writing and reading it is all but a formality, then it is a justified specification.


The fabled animal "irreducibly complex infrastructure" may still really exist, but you should have a go at it with real tools before calling in the medium.

Wednesday, September 16, 2009

The decision making triangle

As an avid agilist, I maintain that the manager should do little more than put the following three powers together.
* The techie.
* The funding.
* The customer.

These are the three roles in any "go"/"no-go" decision on whether or not a particular technical endeavour is worthwhile.

The "funding" is simply someone who is able to decide cost against benefit. The financial backbone for a particular push within your portfolio is often quite obfuscated in a large organization. When you no longer produce bang for the buck, you stop.

The "customer" is simply the someone who can make a decision on a satsifactory solution to any given marketable feature. The customer represents the totality of the demands. In some cases, being an internal customer that is able to map external requirements to internal technical achievable backlog items is a true assett.
A good customer is hard to come by.

Together, the two can decide on where the efforts can give the most value.

The techie translates solution proposals into estimates.


The main discipline between these three is to identify the MMFs (minimum marketable features) that will individually add to the value of the product. It is sometimes hard for a customer to see that his problem is not an irreducible all-or-nothing deal.
Again, consider the value of a good customer.

The customer is rarely an external person. When deciding on these issues, there should be as few vested interests (and politics) as possible. An external customer is likely to play up his requirements, is unlikely to be willing to vouch satisfaction at the drawing board, and various scopes are subject to contract negotiations.

On small projects, one person can have all these hats.
On large projects, the "customer" can have a full-time team of three people behind him doing nothing but mapping classification requirements to the proposed features.

Since the development departments are mostly techies, the two other roles are likely to suffer. Especially the customer part. We have many products where it is unclear who demanded what, and in what manner we fulfill certain requirements.

Every agilist should appreciate a customer who can think on his/her feet.

Friday, September 4, 2009

TDD nails it

Introduction

Imagine yourself TDD'ing a thermostat controlling a heater. Your first unit test is to set the temperatures in the fake thermometer as well as the Thermostat object, which is the unit under test, and to verify that it actually starts the heater.

Now, in the production code, a TDD beginner will find it tempting to check the temperature and then start the heater. However, the correct thing is not to check the thermometer, but just to start the heater regardless of input, because that is all that is required to pass the first test. You don't need to check the thermometer before you have a test that would fail if you didn't check it.

If the TDD beginner followed his gut feeling, implemented the check right away and then proceeded to the next feature, he would be able to remove the thermometer check, introducing a bug in his software, without any tests failing.

Thus removing the premise for safely refactoring and changing without breaking.
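
To make this concrete, here is a minimal sketch of that first cycle, assuming hypothetical IThermometer/IHeater interfaces and NUnit-style tests. The names are my own, not a prescribed API:

    using NUnit.Framework;

    public interface IThermometer { double Temperature { get; } }
    public interface IHeater { void Start(); }

    // Trivial hand-rolled fakes are all the first test needs.
    public class FakeThermometer : IThermometer { public double Temperature { get; set; } }
    public class FakeHeater : IHeater
    {
        public bool Started;
        public void Start() { Started = true; }
    }

    public class Thermostat
    {
        private readonly IThermometer _thermometer;
        private readonly IHeater _heater;
        public double DesiredTemperature { get; set; }

        public Thermostat(IThermometer thermometer, IHeater heater)
        {
            _thermometer = thermometer;
            _heater = heater;
        }

        public void Regulate()
        {
            // Deliberately no thermometer check yet: no test forces it.
            // Only a second test (heater stays off when it is warm enough)
            // will drive the conditional into existence.
            _heater.Start();
        }
    }

    [TestFixture]
    public class ThermostatTests
    {
        [Test]
        public void StartsHeaterWhenTemperatureBelowDesired()
        {
            var thermometer = new FakeThermometer { Temperature = 15.0 };
            var heater = new FakeHeater();
            var thermostat = new Thermostat(thermometer, heater) { DesiredTemperature = 20.0 };

            thermostat.Regulate();

            Assert.IsTrue(heater.Started);
        }
    }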


TDD nails it.

Now, if you were using a hammer and nails to fix a piece of wood (say, a board) in place, a novice could be equally "tempted" to hold the board in place and just hit a single nail into its center before moving on.

This would look all well and good until the forces started acting upon it, in which case the board would quite likely shift out of place.

Now, let us pretend that a piece of functionality is a board and that unit tests are nails.

By using just a single nail, both the piece of functionality and the board would seemingly be held in place by nails, but in reality the real upkeeper for both is friction: friction on the wood surface and friction in the code base. Neither is actually nailed in place.

When I implement a piece of functionality, I certainly hope to have "nailed it"!

Thursday, September 3, 2009

"As simply as possible"

Yesterday, I held an internal TDD Introduction workshop for about 10 developers.

I had no slides, just winging it using VS2008 on a projector. And using the whiteboard.

The workshop was in two parts.
1. The common calculator kata that eventually becomes string-based, requiring a tokenizer. This is to push the Red-Green-Refactor envelope. I know: pretty boring, but you want to put the emphasis on the tight cycle of TDD. (A first cycle is sketched right after part 2 below.)

2. A case with external dependencies.
The specific use case was an HVAC controller. It starts off as a simple thermostat controlling a cooling fan, but evolves into a full-fledged HVAC controller.
People discover that the API of a thermostat is principally different from that of the HVAC controller. For example, "desired temperature" can be a single number in a thermostat, whereas the HVAC controller needs a desired temperature with an upper and a lower limit. The pattern geeks may argue that the HVAC controller is actually two thermostats working in opposition (one for "cold", one for "heat"), which certainly makes for some interesting programming.
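
For part 1, a first Red-Green cycle might look something like this (a hedged sketch; class and method names are mine):

    using NUnit.Framework;

    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void AddsTwoNumbersInAString()
        {
            var calculator = new Calculator();
            Assert.AreEqual(5, calculator.Evaluate("2+3"));
        }
    }

    public class Calculator
    {
        public int Evaluate(string expression)
        {
            // Just enough to go green; a real tokenizer is refactored in
            // later, once tests with more operators demand it.
            var parts = expression.Split('+');
            return int.Parse(parts[0]) + int.Parse(parts[1]);
        }
    }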
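
And the API difference people discover in part 2 can be sketched like this (again hypothetical names, not the workshop's actual code):

    // A thermostat gets away with a single setpoint...
    public interface IThermostat
    {
        double DesiredTemperature { get; set; }
    }

    // ...whereas the HVAC controller needs a band: heat below the lower
    // limit, cool above the upper one. Arguably two thermostats working
    // in opposition.
    public interface IHvacController
    {
        double DesiredTemperatureLow { get; set; }
        double DesiredTemperatureHigh { get; set; }
    }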

The three immediate takeaway benefits are:
* TDD forces you to think about your interface. And working in pairs produces some fruitful debate at a very early stage. Paradoxically, people suddenly find themselves unable to express their use cases against their own API (and then they start blaming me).

* They discover that there is a "secret" dependency on time. One dependency-driving requirement is that the air fan should run for two minutes after the furnace has been switched off. Hopefully they'll muck around with DateTime static helpers before realizing that time must be controlled from the tests (see the sketch after this list).

* Manual mocks versus isolation frameworks (Moq). The initial test cases make for slim manual fakes, but the Moq syntax is usually heartily received.
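
The time dependency deserves a sketch. Assuming a hypothetical IClock abstraction (my naming) injected into the controller, the two-minute rule can be pinned down with Moq along these lines:

    using System;
    using Moq;
    using NUnit.Framework;

    public interface IClock { DateTime Now { get; } }
    public interface IFan { void Start(); void Stop(); }

    public class FurnaceFanController
    {
        private readonly IFan _fan;
        private readonly IClock _clock;
        private DateTime? _furnaceOffAt;

        public FurnaceFanController(IFan fan, IClock clock)
        {
            _fan = fan;
            _clock = clock;
        }

        public void FurnaceOff()
        {
            // Remember when the furnace went off; the fan keeps running.
            _furnaceOffAt = _clock.Now;
        }

        public void Tick()
        {
            // Stop the fan once two minutes have passed since furnace-off.
            if (_furnaceOffAt.HasValue &&
                _clock.Now - _furnaceOffAt.Value >= TimeSpan.FromMinutes(2))
            {
                _fan.Stop();
                _furnaceOffAt = null;
            }
        }
    }

    [TestFixture]
    public class FurnaceFanTests
    {
        [Test]
        public void FanRunsForTwoMinutesAfterFurnaceOff()
        {
            var clock = new Mock<IClock>();
            var fan = new Mock<IFan>();
            var controller = new FurnaceFanController(fan.Object, clock.Object);

            clock.Setup(c => c.Now).Returns(new DateTime(2009, 12, 1, 12, 0, 0));
            controller.FurnaceOff();

            // One minute later: the fan must not have been stopped yet.
            clock.Setup(c => c.Now).Returns(new DateTime(2009, 12, 1, 12, 1, 0));
            controller.Tick();
            fan.Verify(f => f.Stop(), Times.Never());

            // Past the two-minute mark: now it must stop.
            clock.Setup(c => c.Now).Returns(new DateTime(2009, 12, 1, 12, 2, 30));
            controller.Tick();
            fan.Verify(f => f.Stop(), Times.Once());
        }
    }

The point is that nothing in the test touches DateTime.Now directly; time is just another injected dependency.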

We only had half a day, which went by in the blink of an eye, but I think that we ran into many of the typical subjects.
There certainly is stuff that is hard to get, even in the simple world of TDD. We hit some speed bumps when we crossed the following:

Problem 1: Lambda expressions.
I have opted for our group to go with Moq as the isolation framework of choice, because of the crisp syntax and a lot of sense and intellisense. The most noted objection against Moq is that it makes heavy use of lambda expressions and is therefore only available from .NET 3.5 onwards. However, that is of no concern to us.
We had some MFC programmers in the workshop. Even seasoned and capable programmers are visibly in pain when they try to wrap their heads around lambda expressions. I wanted to showcase the evolution named method -> anonymous method -> lambda, but I messed up the syntax for the anonymous method and we were too pressed for time to google it.
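
For the record, the evolution I wanted to showcase goes roughly like this; the anonymous method in the middle is the syntax I fumbled:

    using System;
    using System.Collections.Generic;

    class DelegateEvolution
    {
        // 1. Named method (C# 1.x): a plain method matching Predicate<int>.
        static bool IsEven(int n) { return n % 2 == 0; }

        static void Main()
        {
            var numbers = new List<int> { 1, 2, 3, 4 };

            var evens1 = numbers.FindAll(IsEven);

            // 2. Anonymous method (C# 2.0): the 'delegate' keyword inline.
            var evens2 = numbers.FindAll(delegate(int n) { return n % 2 == 0; });

            // 3. Lambda expression (C# 3.0): terser, with inferred types.
            var evens3 = numbers.FindAll(n => n % 2 == 0);

            Console.WriteLine("{0} {1} {2}", evens1.Count, evens2.Count, evens3.Count); // 2 2 2
        }
    }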


Problem 2: "As simply as possible".
As simple as it may sound, this rule is all but simple if you start going down the road of creating the wrong kind of simplicity. The TDD dogma is that you shall make a test pass in a manner that is as simple as possible.
This is to make sure that the unit tests enforce the functionality of the production code and that the tests truly drive the development process.

Just a week ago, I attended a TDD Masters class by Roy Osherove. I think I grokked the concept of simplicity on the second day.

In my dependency use case, the first MMF is to make the controller start the FAN if the THERMOMETER shows a higher temperature than the desired temperature which you have set against the API of the HVAC controller instance.
Now, what exactly does simplicity mean? What should be the first test?

Does it mean to not interact with the mocked FAN?
Does it mean to not interact with the mocked THERMOMETER?
Why would it have dependencies at all? Can't you just put a temperature in and then ask it if it wants to start the fan?

This started to take the form of a dug-in debate in the class, which I had to cut off.

I think that there is an important distinction to be made here. You do not write unit tests that you intend to make obsolete at a later time. Although a unit test only runs a concrete scenario testing a narrow case, it should still execute and pass against the final product three months from now. You still need to express the complete scenario against what you, at the time of writing, expect to become the final library API for that use case.
If you program against getters and setters where the getters only return true, you are not producing value, as you plan to throw away both the unit test and the production code that made it pass. You are also bestowing a legacy API on a brand new component.


I solve the problem as follows in my very first unit test: I create a mock fan and a mock thermometer (with a temperature), pass them both to the controller as constructor arguments, and expect an attempt to turn the fan on.
This may seem like an advanced use case, but I think it is as simple as it can be.
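
In code, that very first test might read like this with Moq (the interface and class names are my own sketch, not a prescribed API):

    using Moq;
    using NUnit.Framework;

    public interface IThermometer { double Temperature { get; } }
    public interface IFan { void Start(); }

    [TestFixture]
    public class HvacControllerTests
    {
        [Test]
        public void StartsFanWhenTemperatureAboveDesired()
        {
            var thermometer = new Mock<IThermometer>();
            thermometer.Setup(t => t.Temperature).Returns(25.0);
            var fan = new Mock<IFan>();

            var controller = new HvacController(fan.Object, thermometer.Object)
            {
                DesiredTemperature = 21.0
            };

            controller.Regulate();

            // The one and only assertion: an attempt to turn the fan on.
            fan.Verify(f => f.Start());
        }
    }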

People in the TDD domain also argue that you should start with constructor checks. At this point, I am not actually sure if it makes sense to be able to new up a controller without arguments, so I am forgoing that for the moment. If a later use case starts by newing up an empty controller, I will create a specific "IsNotNull" test.

Now, here is the correct meaning of "as simply as possible": when I confirm that the test fails and go into the production code, I totally disregard the thermometer and merely switch the fan on. I do not even store the thermometer reference that was passed in the constructor. This leaves people speechless. The thermometer is available, why not check it? You know that you will have to check the thermometer before starting the fan in the final product...
The key here is to recognize that I would then be able to remove the check without breaking any tests. I would be able to introduce a bug in my code without causing any tests to fail. And if that happens, what good are they? How can I safely refactor the code if the green light does not mean anything?
Everybody agrees that I need at least two test cases to "nail" the functionality in place. And I need the if. When people realize that, they also realize that I am not doing any extra work, I am just doing it in a different order. And the order has additional benefits, because I do not have the overhead of writing unit tests after the complete functionality is already in the production code.
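
The matching production code, continuing the sketch above, evolves in exactly that order:

    public class HvacController
    {
        private readonly IFan _fan;
        private readonly IThermometer _thermometer;

        public double DesiredTemperature { get; set; }

        public HvacController(IFan fan, IThermometer thermometer)
        {
            _fan = fan;
            _thermometer = thermometer;
        }

        // After the first test, the body was just "_fan.Start();" and the
        // thermometer reference was not even stored. A second test ("fan
        // stays off when at or below desired") fails against that version
        // and forces the conditional: the second nail.
        public void Regulate()
        {
            if (_thermometer.Temperature > DesiredTemperature)
                _fan.Start();
        }
    }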

If you write unit tests after the fact, you need to change the production code in order to see them fail. It is important to know that the unit tests you write actually can and will fail.


To sum it up: "as simply as possible" does not mean "as stupidly as possible". You still play for keeps. The production code is a work in progress, but the unit test is supposed to be done; you don't plan on revisiting it. After all, it would be a tedious process, with a growing number of unit tests, if you planned on rewriting the API they all consume so that they no longer describe a valid scenario.

Wednesday, September 2, 2009

Aftenposten's article about the Norwegians in Congo

(This entry comments on an ongoing debate in Norway.)

I was surprised by today's story in Aftenposten about Tjostolv Moland and Joshua French.

Here, a well-reputed Norwegian newspaper supplies "anonymous sources" to an obviously corrupt local prosecution during an ongoing trial with execution as a possible outcome.

These become sources used in the legal proceedings which the defendants' counsel has no opportunity to challenge or cross-examine.

Aftenposten is feeding a corrupt process with conveniently unverifiable data, and is thereby taking an active, central part in a process that is not worthy of Norway.

To me, this appears to be the height of journalistic and editorial irresponsibility. With a possibly fatal outcome without a real trial.

The names of the journalists, Thomas Hubert and Sveinung Berg Berntzrød, have at any rate burned themselves into my retina. I hope they are confident enough in their sources that they are prepared to step in as prosecutor, jury and executioner.


I also saw tendencies toward this during the drug case in Bolivia, where the press lacked the sense to protect people from themselves. Even though the defendants in the Bolivia case were "guilty as f...", there came a point at which virtually every newspaper stopped writing about the case, and new "versions" from upset and eager relatives ceased.

Sunday, August 30, 2009

My development Tools

In an average work week I use:

Visual Studio 2008 Team Edition: The IDE of non-choice.

Reflector: There is no excuse for not having this if you work with anything that you haven't made yourself. And even then....

Resharper: Can't live without it. If you develop in VS2005/VS2008, Professional edition or higher, you should waste no time in getting this. It is a plugin that makes Visual Studio what it should have been in the first place. Microsoft is usually very slow to accommodate sound development practices. And even Resharper is a wee bit slow as the test runner of choice.

NDepend: I love this tool. Have you ever wondered how to manage a large code base without actually having to read every single line of code? Then this is it. Also pretty straightforward to integrate into your build to enforce architectural constraints.

Redgate SQL Toolbelt: We have a lot of stuff going on around SQL Server. Even though I have the Database edition of Visual Studio, Redgate are the ones that seem to actually have experience with the problems of managing such servers.

ANTS Profiler: Want to get at those hard-to-find and hard-to-reproduce performance issues? With very little overhead you can leave it running, and if you are able to reproduce the issue, you have it captured. Too many performance tools require you to predict exactly when you are going to see the problem. ANTS Profiler allows you to capture everything and then zoom in on the timeline.



Other stuff:
* Mindmanager. (when you just want to get those thoughts down)
* Expresso. (regex helper)
* Editpad. (notepad on steroids)
* SPX from Moodysoft: (screen grabber with redlining and visual effects)
* Zoomit. Every presenter needs this when sharing a desktop with a projector.

The debate is over, TDD won.

I find that I have very little to add to the words of Robert Martin (Uncle Bob).

I highly recommend that people look at the available videos of him and his talks about professionalism.

Our profession as programmers is very much immature. We find it hard to accommodate accountability and transparency. We have little regard for what constitutes a true "deliverable". And, to paraphrase Ken Schwaber, we have a very odd understanding of what it means to be "done".
http://www.hanselman.com/blog/HanselminutesPodcast119WhatIsDoneWithScrumCoCreatorKenSchwaber.aspx
Basically, we like to do the algorithm, but we do not like to bridge the algorithm to the world.
A nice analogy is the difference between a hobby mechanic and a professional service business.


We put juniors in our office space and we expect them to produce.
Even a person operating the fryer at McDonalds gets some sort of apprenticeship.

We resist change. We resist being scrutinized. We resist being managed. We seldom recognize business value. We resist comparison.
We do not have objective standards or engineering principles. The only professional criticism from programmer to programmer is "that's not how I would do it".

http://www.hanselman.com/blog/HanselminutesPodcast171TheReturnOfUncleBob.aspx


But there are standards to be had. Engineering principles exist. But I sincerely do not know if you will actually appreciate them if you have no experience in the industry. I have yet to see this successfully condensed into a learning experience that a mentor can bestow on an apprentice. Or into the classroom.

Uncle Bob makes the comparison to when doctors learned to wash their hands when walking from an autopsy to the maternity ward. It took a generational change to implement it. And the problem domain was unknown to anybody fresh out of the contemporary version of "med school". One may even argue whether or not it was still the same profession before and after knowing about germs.

Even though I don't think a serious argument still exists against Test-Driven Development as a whole, it is still an immature practice which a significant part of the developer community resists.

Without a frame of reference, it is hard to separate the community from the practices. It has become a part of the expected norm that software has bugs and that software development is an inherent financial liability.

I believe and hope that this will change.

Just fifteen years ago (being careful to pick a timeframe that excludes me), programmers were stereotyped as socially awkward, cave-dwelling magic workers with questionable antics and hygiene. Approachable only by the specially trained project manager who knew better than to hint at a questionable decision or a belated milestone.
Today, most project managers are quite capable of programming in MS Excel, and the sacred art of programming "magic" has unraveled.
Even though I presented a caricature strawman, something has certainly changed.

I hope that things will keep changing. That technical debt, development liability, test code coverage, Continuous Integration and automated deployment will become commonplace real-life business KPIs (Key Performance Indicators) in the future.


The debate is over, TDD won. Will the community inertia demand another 15 years before the change is de facto?

Skepticism and mentalism.

To me, skepticism is about observing the world and then building your knowledge.

I love the fact that outspoken atheists like Richard Dawkins have started to connect with the likes of Derren Brown. I believe James Randi has a big part of that credit. I think that this was an important pragmatic step for skepticism.

I do, however, find that fellow skeptics are less comfortable with that. For example, to me the "placebo effect" is something wonderful that we should celebrate and try to isolate and purify.
My peers, however, usually consider the placebo effect (is there any other human intellectual endeavour where you name an effect after the one thing you know to be irrelevant?) a reason for dismissal. I strongly disagree. I am quick to dismiss the explanations given (aura, holism, spiritualism, deities), but I recognize skills and recipes, and I am fascinated by the mere fact that the body can heal itself (or at least stop behaving symptomatically). If I could invoke the "placebo effect" at will (without the mumbo-jumbo), I would prefer that to taking a pill for a normal headache.

My own blog

Somewhere along the way, I realized that I victimize my coworkers at the water cooler by bestowing my idea of "interesting conceptual problems" on them. Usually this means that the problems have no real-world application, or even relevance.

The blog allows my banter to be victimless. And, since I don't yet have readers, I suppose the subjects will be all over the place.

Maybe, somewhere along the line, I'll find that someone is willing to give me feedback other than "huh?"


Now about me....

I am an outspoken freethinker and atheist. I pull few punches when it comes to discussing subjects. Sometime in the future, I hope to learn to listen as well... When I find time for it, that is....

I am an active athlete in underwater rugby (coach and team captain) and an avid amateur of combatives. I have done some instructing in the professional application of force as well as self defence, and I am pretty good.

I have a best friend who is a hypnotherapy phenom. I love discussing mentalism, hypnosis, NLP and mind states.

I am a programmer in C#.NET. That's me for the moment. Not that I don't recognize Java, functional programming or dynamically typed languages. But I find that they expand on a tangent that I'm not "into" right now. Hopefully that will change when I get good with my current programming domain.

I am well into popular sciences.

Come to think of it, I sure hope this blog engine supports tags, so that I won't lose .NET readers due to my banter on hypnosis. Most blogs that I follow are all on the same theme.

The reverse dependency problem. TDD

When building the word chain app, http://codekata.pragprog.com/2007/01/kata_nineteen_w.html , I quickly identified that there are multiple problem domains involved. One is the generic question of finding the shortest path in a mesh of connected nodes.
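
That generic sub-problem is plain breadth-first search. A minimal sketch (my own code, not part of the kata):

    using System.Collections.Generic;

    public static class ShortestPath
    {
        // Breadth-first search over an unweighted graph: returns the
        // shortest chain from start to goal (e.g. words differing by one
        // letter), or null if no chain exists. Assumes every node has an
        // entry in the neighbours map.
        public static List<string> Find(
            string start, string goal,
            IDictionary<string, List<string>> neighbours)
        {
            var cameFrom = new Dictionary<string, string> { { start, null } };
            var queue = new Queue<string>();
            queue.Enqueue(start);

            while (queue.Count > 0)
            {
                var node = queue.Dequeue();
                if (node == goal)
                {
                    // Walk the breadcrumbs back to reconstruct the chain.
                    var path = new List<string>();
                    for (var n = goal; n != null; n = cameFrom[n])
                        path.Insert(0, n);
                    return path;
                }
                foreach (var next in neighbours[node])
                {
                    if (cameFrom.ContainsKey(next)) continue;
                    cameFrom[next] = node;
                    queue.Enqueue(next);
                }
            }
            return null;
        }
    }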

Then the question is: do you build this functionality with its own set of unit tests or not?

I would like to say "not". I don't want to take "time out" from my real business problem to build a different class library in some BDUF venture. I want to refactor into this design.

However, let us say that I proceeded with this, and elegantly refactored a utility class library that was generic and succinct.

And then this class library is so generic that I have more client apps that should be consuming it. And new clients warrant more functionality.
I may not be able to express new client requirements from my word chain problem, and therefore I need to TDD from the unit tests of the new client apps.

However, the class library then slowly becomes "dependent" on a growing number of its clients, because that is where its unit tests live. It won't get coverage unless you run the unit tests of its clients.

I realize that at some point you must recognize that the new class library is a new "deliverable", and thus you need to extract its own tests. This will, however, result in duplication and a longer release cycle for the word chain app, because the library is a separate release item. New word chain app requirements will take longer to develop, because new requirements for the class library must be TDD'ed through the unit tests of the class library, in parallel with the unit tests of the word chain app.

Theoretically, you should have 200% coverage in the class library: one "100" through the sum of its clients' unit tests, and one "100" through its own unit tests.

Interesting problem. It becomes a sort of reverse dependency problem: stable components become dependent on the unit tests of less stable components.


Do you find that you get test duplication because you don't want some internal class libraries to be "depending" on the unit tests of their consumers?