Sunday, December 20, 2009

Longer case in favor of setter injection

The point of this post is not to prove that setter injection is better than constructor injection. But with setter injection being the underdog, I am surprised by how much it has helped define my style of development. So despite agreeing that it is safer to always create a sound object, I get to write less production code and more expressive tests with the setter strategy.

The previous post laid the ground rules and general ideas behind why setter injection has simplified my code. Looking into Fubu MVC's code, I found a nice example to illustrate what I mean. It's kind of funny, because it was a post by Jeremy Miller that inspired me to look into Fubu's implementation. In that post he even jokes about Chad using setter injection.

Here is the Test




[TestFixture]
public class execute_one_in_one_out : InteractionContext<OneInOneOutActionInvoker<ITargetController, Input, Output>>
{
    private Input theInput;
    private Output expectedOutput;

    protected override void beforeEach()
    {
        Func<ITargetController, Input, Output> func = (c, i) => c.OneInOneOut(i);

        Services.Inject(func);

        theInput = new Input();
        expectedOutput = new Output();

        MockFor<IFubuRequest>().Expect(x => x.Get<Input>()).Return(theInput);
        MockFor<ITargetController>().Expect(x => x.OneInOneOut(theInput)).Return(expectedOutput);

        ClassUnderTest.Invoke();
    }

    [Test]
    public void should_have_stored_the_resulting_data_in_the_fubu_request()
    {
        MockFor<IFubuRequest>().AssertWasCalled(x => x.Set(expectedOutput));
    }

    [Test]
    public void should_invoke_the_controller_method()
    {
        VerifyCallsFor<ITargetController>();
    }
}




and the Code




public class OneInOneOutActionInvoker<TController, TInput, TOutput> : BasicBehavior
    where TInput : class
    where TOutput : class
{
    private readonly Func<TController, TInput, TOutput> _action;
    private readonly TController _controller;
    private readonly IFubuRequest _request;

    public OneInOneOutActionInvoker(IFubuRequest request, TController controller,
                                    Func<TController, TInput, TOutput> action)
    {
        _request = request;
        _controller = controller;
        _action = action;
    }

    // TODO: Harden against failures?
    protected override DoNext performInvoke()
    {
        var input = _request.Get<TInput>();
        TOutput output = _action(_controller, input);
        _request.Set(output);

        return DoNext.Continue;
    }
}




Here is how I would write the Test




[TestFixture]
public class when_executing_one_in_one_out : BehaviorOf<OneInOneOutActionInvokerWithSetters<ITargetController, Input, Output>> {

    [Test]
    public void should_have_stored_the_resulting_data_in_the_fubu_request() {
        var Input = new Input();
        var Output = new Output();

        Given.Action = (c, i) => c.OneInOneOut(i);
        Given.Request.Get<Input>().Is(Input);
        Given.Controller.OneInOneOut(Input).Is(Output);

        When.PerformInvoke();

        Then.Request.Should().Set(Output);
    }
}




and the Code




public class OneInOneOutActionInvokerWithSetters<TController, TInput, TOutput> : BasicBehavior
    where TInput : class where TOutput : class {

    public Func<TController, TInput, TOutput> Action { get; set; }
    public TController Controller { get; set; }
    public IFubuRequest Request { get; set; }

    public override DoNext PerformInvoke() {
        var Input = Request.Get<TInput>();
        var Output = Action(Controller, Input);

        Request.Set(Output);

        return DoNext.Continue;
    }
}




In both cases the TestFixture inherits from a base class. This is the Testcase Superclass pattern from Meszaros' book. The idea is to establish a context for the SUT and simplify DI and mocking. In Fubu, InteractionContext provides a facade around Rhino Mocks, enabling automocking and simplifying expectation calls. It also wraps the SUT with ClassUnderTest.
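
As a rough sketch of the pattern (this is not Fubu's actual implementation; AutoMockerSketch below is a hypothetical stand-in for whatever auto-mocking container is used, and Services.Inject/VerifyCallsFor are left out), the superclass builds the SUT with mocked dependencies during setup and hands them back to the tests:

public abstract class InteractionContextSketch<TSut> where TSut : class {

    // AutoMockerSketch is a hypothetical auto-mocking container, not a real library type.
    protected AutoMockerSketch Container;
    protected TSut ClassUnderTest;

    [SetUp]
    public void SetUp() {
        Container = new AutoMockerSketch();        // creates a mock for every dependency of TSut
        ClassUnderTest = Container.Build<TSut>();  // wires the SUT with those mocks
        beforeEach();
    }

    protected virtual void beforeEach() { }

    // Hands back the mock for a dependency so a test can set expectations or assert calls on it.
    protected TDependency MockFor<TDependency>() where TDependency : class {
        return Container.Get<TDependency>();
    }
}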

In my case, BehaviorOf is the same pattern built into FluentSpec. The SUT dependencies are automocked and the SUT is wrapped, but instead of accessing the SUT through ClassUnderTest it is accessed through the Given/When/Then connectors. As a result I did not need to implement an InteractionContext of my own.
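
Conceptually (this is not FluentSpec's real code; the framework wraps the SUT and its dependencies in recording proxies so that calls like Given.Request.Get<Input>().Is(Input) can stub return values, which this sketch leaves out), the connectors are just readable views over the same wrapped SUT:

public abstract class BehaviorOfSketch<TSut> where TSut : class, new() {

    protected TSut Given;  // assign setter dependencies and stub queries
    protected TSut When;   // exercise the behavior under test
    protected TSut Then;   // verify the outcome

    [SetUp]
    public void SetUp() {
        // Setter injection lets the SUT be created bare, with its
        // dependencies replaced by test doubles one property at a time.
        var sut = new TSut();
        Given = When = Then = sut;
    }
}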

Fubu's test was refactored into a common setup pattern, which enables each test to be a single line, and that line is an assertion. That is test nirvana. However, the second test case does a VerifyCallsFor<TestDouble>, which is a test smell. My test has a single assertion, so there was no point in refactoring to a common setup.

Because of the public dependencies, instead of:




Func<ITargetController, Input, Output> func = (c, i) => c.OneInOneOut(i);

Services.Inject(func);
MockFor<IFubuRequest>().Expect(x => x.Get<Input>()).Return(theInput);
MockFor<ITargetController>().Expect(x => x.OneInOneOut(theInput)).Return(expectedOutput);

MockFor<IFubuRequest>().AssertWasCalled(x => x.Set(expectedOutput));




I can write:




Given.Action = (c, i) => c.OneInOneOut(i);
Given.Request.Get<Input>().Is(Input);
Given.Controller.OneInOneOut(Input).Is(Output);

Then.Request.Should().Set(Output);




This is simpler and more expressive, and no lambdas or brackets were harmed in vain. Also notice how much clearer the separation between the test steps Setup/Exercise/Verify is when mapped to Given/When/Then than with the AAA syntax.

The Exercise step changed from ClassUnderTest.Invoke() to When.PerformInvoke(). In this case When is a win over ClassUnderTest, but making PerformInvoke public is a loss. The tradeoff in my case boils down to focus. I want to exercise the essential amount of code related to the behavior being tested, and calling Invoke goes through a path of the code that I don't care about while defining when_executing_one_in_one_out.

VerifyCallsFor<ITargetController> was not needed because all it does is ensure that the expectations set for ITargetController were executed. In this case MockFor<ITargetController>().Expect(x => x.OneInOneOut(theInput)).Return(expectedOutput) is a query. This happens mostly as a misuse of the AAA syntax with Query/Command. Queries belong to the setup step, and it feels silly to call AssertWasCalled on a query statement.

Queries can be verified either by checking affected state in the SUT, via the return value in a delegated query, or because their result becomes an argument in a command call. We don't need to verify that Controller.OneInOneOut(Input) occurred, because Given.Controller.OneInOneOut(Input).Is(Output) is precisely what makes Then.Request.Should().Set(Output) pass.

Encapsulation is one of the fundamental concepts that makes OOP work. Sadly, it reinforces the human feeling of protection, and we do the silliest things in the name of protection. By letting go of class encapsulation in favor of abstract dependencies I have developed a style that leads to cleaner tests. Setter injection has opened the opportunity for the dependencies to join fluently in the chain of specifications.

I'd rather have everything, but that's impossible. The tradeoffs you make are related to your personality. Because I favor expressiveness a lot more than security, I was able to suspend my beliefs and let go. I am much happier with more expressive tests than with the security that constructor injection offers. But that's me; you have your own rules to make, break and follow.

Thursday, November 26, 2009

Small case in favor of setter injection

In general, setter injection is frowned upon. The main reason is that an object should have all its dependencies at creation time. That is very reasonable, but I find it a tad on the paranoid side. Or better put, I have been using setter injection only and have not run into any problem yet. Furthermore, with setter injection my objects and test code have remained cleaner than with the alternative options.

Here are the rules

  • Depend only on interfaces
  • Don't use new
  • Don't define constructors in the class

Always depending on interfaces sounds extreme, but it happens naturally if a system is developed outside-in, mocking the dependencies. The principal advantage of mocking is increased focus at the time of writing the code; the decoupling is a side effect. Another side effect is that the interface becomes the public interface of the class.

Therefore making things public in the class doesn't break encapsulation. I do confess that having more public and virtual elements in the class obscures the understanding of what the class offers. But the interface is right there, and it's easy to see what the class exposes to others through it.

Depending only on interfaces needs to be accompanied by forbidding the use of new. If an object has a dependency, say "Command ToSave;", and does "ToSave = new SaveCommand();" to create it, the class ends up depending on both the interface Command and the concrete class SaveCommand.
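
To make that concrete, here is a tiny hypothetical example (SaveButton is made up for illustration; Command and SaveCommand come from the sentence above):

public class SaveButton {

    private Command ToSave;

    public SaveButton() {
        // Coupled to both the abstraction (Command) and the concrete class (SaveCommand).
        ToSave = new SaveCommand();
    }
}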

Once new is forbidden there are two ways to create an object: either with a factory object or with a DI framework. Both approaches lead to the decision of setter injection vs constructor injection. I chose setter because the class ends up cleaner that way, by not needing constructor logic.
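
The setter-injected version of the same hypothetical class needs no constructor at all; the container or factory assigns the dependency after creation (the Execute() method on Command is assumed just for the sake of the example):

public class SaveButton {

    // Only the abstraction is referenced; no new, no constructor logic.
    public Command ToSave { get; set; }

    public void Click() {
        ToSave.Execute();
    }
}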

There's no way of generating an object in an unstable state, because the dependencies are either injected by a framework or passed as arguments to the factory method. There's a backdoor to the object via new, but it is simply forbidden by the rules and conveniently available to the test code.
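
A hypothetical factory shows the other creation path: it is the one place allowed to call new, and the object cannot leave it without its dependency assigned:

public static class SaveButtonFactory {

    public static SaveButton Create(Command toSave) {
        return new SaveButton { ToSave = toSave };
    }
}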

The test code becomes simpler because we need to mock only the dependencies involved in a particular behavior. It is a smell to have multiple dependencies with only some of them needed at a given time, but it happens, even if only at some stage before the object is refactored into a better design.
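
For example, a test only has to supply the dependency that the behavior under test actually touches. Here is a plain NUnit sketch with a hand-rolled fake (FakeCommand is hypothetical, continuing the SaveButton example):

[TestFixture]
public class when_clicking_save {

    [Test]
    public void should_execute_the_command() {
        var command = new FakeCommand();                  // a fake only for the dependency involved
        var button = new SaveButton { ToSave = command }; // new is fine inside test code

        button.Click();

        Assert.IsTrue(command.WasExecuted);
    }
}

public class FakeCommand : Command {
    public bool WasExecuted;
    public void Execute() { WasExecuted = true; }
}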

Sunday, August 16, 2009

The Rules of Coolban

1. Do not talk about Kanban

2. Do not talk about Coolban

3. Every rule can and should be challenged

4. A story is ready to Code when the acceptance tests are done

5. A story is ready to Test when the acceptance tests are green on QA environments

6. A story is in Build after being preapproved

7. A story is Done after being demoed on an integration environment

8. Every story has a sponsor, if you are the sponsor, you should pave the story's way to Done

Saturday, August 15, 2009

The Passionate Programmer

The fact that I published this proves how valuable "The Passionate Programmer" by Chad Fowler is. I am applying his advice to "Let your voice be heard" from the chapter "Marketing... Not just for suits". I already do roughly 80% of the things the book recommends to guide you on the path to a great career. But what makes it unique is the fact that I agree with 100% of what it says, and it pushes me to do some of that 20% I keep chickening out of doing.

If you already have a great career you would read this book at a Dreyfus level of Expert. In that case I can't really tell how the expert-to-expert transfer of knowledge goes. You might enjoy the anecdotes, especially when they lead to metaphors back and forth between music and programming. It helped me do retrospectives on misbehaviors, as in "Learn to love maintenance", which maps to errors in my early years, like when I desperately wanted to leave my team in 2008 to escape from legacy code.

The book is the second edition of "My Job Went to India: 52 Ways to Save Your Job", and it was refactored to read more like 52 pieces of advice to make your software development career shine. It still has a lot of references to India, since the author lived and worked there. For instance, the warning about "The South Indian monkey trap" is an eastern version of the boiling frog from "The Pragmatic Programmer". I do think, however, that it's crueler, because they actually do that to monkeys.

The book closes with the advice to "Have Fun". "Software development is both challenging and rewarding. It's creative like an art-form, but (unlike art) it provides concrete, measurable value". If you've chosen to become a software developer, this book reassures you why you should feel lucky and offers advice on how to steer your career.

I didn't choose to be a software developer; I am a natural born programmer. I am not a geek, I just happen to have a natural inclination to think in terms of programmable algorithms. That is what I enjoy the most, and it feels as creative and cool as being a rock n' roll guitar player. This book is about how to make it to the big gigs and perform at the highest level in your profession. To become a rock star doing what you love: programming.

Tuesday, July 28, 2009

Coolban: Part 3 – Welcome to Coolban

The first rule of Coolban is "you do not talk about Kanban". The second rule of Coolban is "you do not talk about Coolban". We are not about labels and fashions. We are not about process fondness and methodology zealotry. We want to be highly profitable for the enterprise, professionally accomplished as individuals and harmoniously gelled as a group.

Blindly following anything is a recipe for disaster. Every agile methodology warns about adapting principles and practices to your context. We are lucky to have a mix of experienced agilists and energetic journeymen. That's why having a custom process was a no-brainer. We are taking any idea that seems to help us achieve our goals, experimenting with it, reflecting and adapting it accordingly.

Coolban is an experiment inspired by Kanban and bounded by Enterprise Scrum rules. Lean is meant to reach the whole enterprise, but it's also common to have Scrumban as a transitional step. Our biggest impedance with the system so far has been on the strategy/business side. It could be due to a newborn team's inherent lack of time to build trust, along with human resistance to change.

These impedances provide context and opportunity for adaptations. For instance, we plan and estimate biweekly. During these sessions we also write high level acceptance tests. The POs are not actively involved in the generation of the tests or in the pull mechanics of Coolban. They just want to see the story follow the workflow in Jira like every other team does.

We put the sprint stories in the story cloud on top of the Coolban with their estimates on them. Once a slot becomes available in the Ready stage, a story can enter the stream. After all the planned stories are pulled from the cloud we request to bring a story into the sprint. However, the PO might consider that we have too much WIP and not feel comfortable about a safe sprint completion. At that point we simply pull from the prioritized backlog without being officially committed by Scrum rules.

The flow rules are also synchronized with Jira's workflow. We set the story to In Process before it's moved into Code, to In Build before the Build stage, and onto the done spike after it is Closed in Jira. The tasks for dependencies are tracked with stickers. And we are also planning on attaching a picture of the team member Assigned To the story.

Secretly, yes, we want to influence the enterprise. We are set to propagate and participate in initiatives to continuously improve the system as a whole. We invite you to do the same within your bounded context while keeping the harmony in the system. Welcome to Coolban: if you are not content with the status quo, you have to fight.

Coolban: Part 2 – Meet the Column


The Kanban Board is usually, well, ehem, a board. However, due to the excess of shelving in our team's room, we are left with only one whiteboard on one wall. A quick survey with facilities brought more impediments to our walls' emancipation: if we remove the shelves the walls need to get painted, which would happen over a weekend after the approval of the project.

That put my R-Mode to the task of finding alternatives, bringing the focus to the column that splits our window into two halves. The Autobots that inhabited the place had a tiny cork board right there, which they graciously took away along with all the other boards before we moved in. At least I can thank them for the inspiration.

Getting a table onto a column is at first challenging. I thought of using all its faces and having a wrap-around table. The other logical choice was considering rows instead of columns for each stage of our process. Now I had a clear picture in my head. To make it real I went raiding for office supplies and found post-its, clear tape and, to my surprise, colored paper.

The next decision was about the direction of the flow, either top-down or bottom-up. Top-down seems more natural, giving the notion of gravity to the board. I can even dream of the possibility of needing just gravity to get the stories done.

Each row should have a consistent design that must be flexible, uniform and pretty. So we used a color per row for the title and the kanbans (slots where a story can be placed). It came out so pretty that there was no need to write the capacity per row; it can be seen by counting the story cards and empty kanbans.

The first stage represented was obviously Code, in blue. It could also be called Development, but that kind of diminishes the importance of testing. For this stage we set two available slots; our team has three programmers, so this will ensure pairing. It is also generally advised to set the limits below 100% of capacity, since slowdowns happen well before full utilization.

The stage that followed was Testing, colored in green. With two testers on the team it was an easy decision to allocate only one slot. It looks like the testers could take more, but so far the programmers have been slower, so even if the testers have more capacity the constraint of 1 seems more reasonable. With this we have set a limit of 3 stories in progress (WIP).

It is also important to notice that Kanban is about the whole team collaborating. It exploits the specialized knowledge of each team member in the corresponding stage, while expanding their general knowledge through the interaction forced by the limits. For instance, a tester could pair with a developer while the other tester takes care of the story in the Test stage.

The next stage was Build, ironically painted in red. A story that made it this far is considered very low risk for the team, given that we practice CI. We gave it an allowance of 3 and decided that a story would make it here after it's preapproved by our PO. It only leaves this stage, and the board, once the story makes it into the integration environment, is demoed and is closed.

The only missing stage is Ready, in yellow. This is the first stage in the process, with a limit of 2 stories. It is fed from a cloud of stories that the team committed to get done during the sprint. It is meant to clear the stories' dependencies on other teams and leave them ready for coding with their corresponding acceptance tests.

There's also an additional row at the bottom of the table, colored in pink, for Problems. We have no limits for this row and it's not a stage of the flow. It is there to offer visibility of team impediments. I am looking forward to renaming it to Kaizen, signaling ideas for improvement, but so far they feel more like problems.

At this point you might wonder where the Done stage is. Where's the champagne? We are all about celebrations, but a wall full of done stickers is a luxury we cannot afford, and I would consider it a waste of space regardless. To minimize space and keep the trophies in sight we are looking for a spike, but they seem a bit too retro and hard to find.

There are rules, or control checks, to allow a story to move from one stage to another. It can go from Ready to Code once it has its acceptance tests defined and any external dependency cleared. From Code to Test when it's available in the sandbox and passes the acceptance tests. From Test to Build when the PO has preapproved it. And out of the column after the story is deployed, demoed and closed.

At any given time a story might get a red or green sticker signaling a special condition. Red stickers signal unplanned events, like requests that were not flushed out in the Ready stage. Green stickers are for expected temporary events, like the build number in which the story is expected to be integrated into the mainline. This increases the visibility of impediments, making them easier to track and resolve.

With the column in place the buzz started within the team. Some called it an information radiator, a visual board or simply a kanban board. Initially I called it Kanban Column, which evolved into Colban. Once the name Colban was proposed in the team chat, the first reply was "cool". And the term Coolban was adopted, because, well, it's cooler.

Coolban: Part 1 – Enter Kanban

Today we took a firm step in the path of Lean thinking. Kanban is this cool new methodology or process that seems too fashionable to be ignored. Of course besides that obvious reason we have a stronger motive to follow the cool kids. Our newly created team is swamped with latency due to several internal and external challenges.

Latency is the enemy of flow and takes a special toll on the souls of well-meaning humans. In other words, wasting time is not only bad for businesses, it's even worse for employees who want to shine and feel productive. This became obvious to me when a team member noticed that I was not as positive as usual. Here comes Kanban to the rescue!

Not really, there are no silver bullets. But as long as there's spirit and desire to improve, there will always be some idea that can be tried to overcome obstacles. In this case we want to get our stories done with a minimum of interruptions. Therefore, making both the stories and the impediments visible all the time leaves little room for them to hide.

We could improve our communication, track Jiras, be more proactive emailing other teams, bring impediments forward in SoS meetings. None of that beats the physical board, and I learned it just today. Why? Because now the stories are alive; it's a completely different level of interaction, virtual reality at its best.

For those who have already painted walls with post-its this is old news. So Kanban must go beyond letting you see and feel your process. It must encourage you to improve it while having fun along the way. It should gradually increase the team's awareness of the way they work and interact with each other. We experienced some of that in just a day, but first let's introduce the Column.

Monday, July 27, 2009

Why another mock framework?

FluentSpec was announced as a BDD framework trying to avoid the mock controversy, in particular the one caused by test doubles. But a Mock is both a test double and a development practice; there's no way to avoid the term. And BDD is mainly a communication practice that goes far beyond using the GWT syntax to write unit tests. FluentSpec is better described as a mock framework with a BDD flavor.

There are multiple mock frameworks in C#: RhinoMocks, Moq, NMock and TypeMock. I started with NMock and switched to Rhino. At that moment the main reason for the switch was that NMock was using string literals to set up calls. RhinoMocks kept me happy until my style of development required more isolation of the SUT. Isolation is the specialty of TypeMock, but I think it goes too far, allowing the usage of Band-Aids in places where we should stitch.




[TestClass]
public class when_defining_a_complex_method {

    [TestMethod]
    public void should_split_in_simpler_methods() {

        var Mocks = new MockRepository();
        var Subject = Mocks.PartialMock<Subject>();

        Mocks.ReplayAll();

        Subject.ComplexMethod();

        Subject.AssertWasCalled(x => x.DoASimpleMethod());
        Subject.AssertWasCalled(x => x.DoAnotherSimpleMethod());
    }
}

public class Subject {

    public void ComplexMethod() {

        DoASimpleMethod();
        DoAnotherSimpleMethod();
    }

    public virtual void DoASimpleMethod() {
        throw new System.NotImplementedException();
    }

    public virtual void DoAnotherSimpleMethod() {
        throw new System.NotImplementedException();
    }
}




My conflict with RhinoMocks can be expressed with this test




[TestClass]
public class when_defining_a_complex_method : BehaviorOf<Subject> {

    [TestMethod]
    public void should_split_in_simpler_methods() {

        When.ComplexMethod();
        Should.DoASimpleMethod();
        Should.DoAnotherSimpleMethod();
    }
}




The RhinoMocks test throws a System.NotImplementedException, which might be really easy to fix by removing that line. However, in terms of isolation it implies that when I wanted to test only ComplexMethod, I also ran the other dependent methods. Running the dependent methods might lead to a failure at another level of abstraction. It would be possible to abstract those calls into a dependency, but then I would not need a PartialMock and I would not be reaping the full benefits of the Composed Method pattern.

The test written in FluentSpec, as shown above, passes despite the throw statements.

That is in essence the reason why I needed another mock framework and why I could not wrap it around an existing one. The other reasons are also shown in this example.

.: Simplifies test setup




// with RhinoMocks
var Mocks = new MockRepository();
var Subject = Mocks.PartialMock<Subject>();
Mocks.ReplayAll();

// with fluentspec
BehaviorOf<Subject>




.: Doesn’t have test doubles like Stub, Mock or PartialMock. It has only a TestObject

.: Avoids repetition of SUT references like




// with RhinoMocks
Subject.ComplexMethod();
Subject.AssertWasCalled(x => x.DoASimpleMethod());
Subject.AssertWasCalled(x => x.DoAnotherSimpleMethod());

// with fluentspec
When.ComplexMethod();
Should.DoASimpleMethod();
Should.DoAnotherSimpleMethod();




.: And doesn’t need lambdas or delegates to setup and verify calls

About

Hi,
  I am Mike Suarez, a natural born programmer. With this project I expect to improve my craft and consequently ease my life. I hope somehow this website extends beyond me and becomes helpful to others. Not just because I want to do good, but because you are not that good until you help make others better. And if that happens, this blog could proudly claim to be a programmer’s blog.