
Fellow developers, this is a call to arms! Too often I see developers who immediately cave to external pressure to “just get it done”. “Just” is a dangerous word when it comes to software development anyway. Don’t do it! Stand up for your code, don’t compromise on quality just because someone is breathing down your neck. Take a few moments to think about the impact of what you are being asked to do, both to you (as the current developer) and to others (mostly future developers).

No one wants to work inside crap code, but people seem happy enough to create it under pressure. Sure, you might think that you’ll just fix it up next time, but I’ve found that hardly ever happens.

Do you think that overbearing project manager is ever going to suffer through maintaining or refactoring code that “there wasn’t enough time to test”, or that is unclear because “we need to just get this done now, we’ll refactor it later”? Nope. The project manager’s job is to get the project to some previously decided definition of done, and they may be more than willing to sacrifice fuzzy things like “code quality”, “readability” and sometimes even “test coverage” or “scalability” in exchange for things they are directly measured against, like “deadlines” and “fixed scope”.

The representatives of the business will fight hard for what they believe is important. As a developer, you should fight just as hard for the code. If the business representative is pushing the “just get it done” point of view, push back with quality, readability, maintainability, cleanliness, and all of those other things that good code has. Stand up for what you believe in and make sure that you’re creating code that is pleasant to work with; otherwise you’re going to start hating your job, and nobody wants that to happen.

I’m not prescribing that you just do what you want. That’s silly. You can’t just spend 3 months coming up with the most perfect architecture for solving problem X, or you might be surprised when you come up for air just in time to get made redundant because the organisation doesn’t have any money left.

What I am prescribing is that you fight just as hard for the code as other people fight for other things (like deadlines, estimates, contract sign-off, scope negotiation, etc).

As far as I can see there are two approaches to championing your code.

Communicate

Never attribute to malice that which is adequately explained by stupidity.

Robert J Hanlon

You (as a professional developer) know why quality is important, but other people in the organization might not. They might not realise the impact of what they are (either implicitly or explicitly) asking you to do.

You need to communicate the importance of taking the time to create quality to the people who can actually make decisions. Of course, you need to be able to talk to those people in a language they understand, so you need to be able to speak about things like “business impact”, “cost of ownership” and “employee productivity”. It’s generally not a good idea to go to someone in Senior Management (even if they are technical) and start ranting about how the names of all of your classes and methods look like a foreign language because they are so terrible.

If your immediate superior doesn’t care, then go over their head until you reach someone who does. If you don’t find someone who cares, look for a better job.

Obviously be careful when doing this, as you could land yourself in some very awkward situations. At the very least, always be aware that you should show as much loyalty to your organisation as they would show you.

Do It Anyway

The conventional army loses if it does not win. The guerrilla wins if he does not lose.

Henry A Kissinger

If you feel like you are talking to a brick wall when it comes to quality, you might need to engage in some “guerrilla craftsmanship”. Don’t explain exactly what you are doing in the code; just keep following good development practices. If you’re asked for estimates, include time for quality. Get buy-in from your fellow developers: create a shared coding standard, implement code reviews, and ensure that code is appropriately covered by tests. To anyone outside the team, this is just what the team does. There’s no negotiation; you aren’t asking for permission to develop quality solutions, you’re just doing it.

When that external pressure comes around again, and someone tries to get you to “just get it done”, resist. Don’t offer any additional information; this is just how long it’s going to take. If you are consistent with your approach to quality, you will start delivering regularly, and that external pressure will dissipate.

Be mindful, this is a dangerous route, especially if you are facing a micro-manager or someone who does not trust developers. As bad as it sounds, you may need to find someone to “distract” or “manage” that person, kind of like dangling some keys in front of a baby, while you do what you need to do.

Conclusion

Fight for your code.

No one else will.


One of the most common ways to communicate the principles of the various Agile software development methodologies is games. I like to think that this is because the Agile methodologies are more like games than the others: focused on the team, the overall goal and responding to change, rather than following some meticulously planned process that “guarantees” success. Jeff Patton incorporated those elements nicely into his Yow! 2013 talk (which is well worth watching).

I’m sure there are a lot of games that people use to teach the principles of Agile development methodologies; I’m aware of (and have run) two: Scrum City and the Lean Manufacturing Game (which might actually be called the Lego Lean Production Game). I ran both of these games in the workshops for the new Agile Project Management course at QUT (Semester 2 2014, INB123, replacing the Prince 2 Project Management course), and they proved quite effective at communicating critically important elements of Agile to the students. It helped that they were also fun and very physical, which boosts engagement.

A fellow tutor for the Agile Project Management course (Andrew McIntyre, the great and powerful) came up with the idea for another game, teaching elements of Scrum using Sudoku puzzles. The other tutors and I helped him flesh it out a bit, and we then used our poor students as test subjects to see how it worked out in reality.

It was awesome! Also hilarious.

And thus Scrumdoku! (don’t forget the exclamation point) was born. At a high level, the game is focused around teaching planning and the optimisation thereof, where teams have to maximise the value of delivered Sudoku puzzles over some number of iterations.

Materials

50+ individual Sudoku puzzles.

Standard 9x9 is easiest. I found the generator available at http://www.opensky.ca/~jdhildeb/software/sudokugen/ to be perfect for this. You’ll want about 60% Easy, 20% Medium, 15% Hard and 5% Very Hard. I needed 4 copies of each puzzle, as my workshop had 4 teams of approx. 8 people each. Make sure you get solutions.

Pens/pencils, depending on how confident the teams are. One for each person.

Erasers (highly recommended). Again, one for each person.

Willing Participants (obviously).

Rules

3 Iterations. I used 7 minutes planning/retrospective, 20 minutes work. Make sure you timebox aggressively.

Make sure you leave enough time to actually complete a Sudoku puzzle. 20 minutes felt a little long, so maybe try 15? Experiment and see what feels best for you.

During planning each team will commit to some set of puzzles, of their choosing.

Completed and correct puzzles are worth an amount of points relative to difficulty.

The generator I linked has Easy, Medium, Hard, Very Hard. I used 2, 5, 8 and 13 for value.

Incomplete or incorrect puzzles are worth nothing, and are lost forever.

Participants may choose to mark a puzzle as “Must Have”, in which case it will be worth double points if completed. However, if not completed, or incorrect, it will be worth double negative points.

No electronic solvers.

There are some amazing Sudoku solvers available on phones/tablets now. Just snap a picture and the solver will tell you what numbers go where. Speaking of Sudoku solvers, Peter Norvig has a fantastic article on writing a Sudoku solver. Go read it, it’s great.
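The scoring rules boil down to a tiny function. As a sketch (the type names here are my own, and the point values are just the ones I used):

```csharp
// Point values per difficulty (the 2/5/8/13 scheme from the rules above).
public enum Difficulty { Easy = 2, Medium = 5, Hard = 8, VeryHard = 13 }

public static class Scoring
{
    // "Must Have" doubles the reward for a correct puzzle, and turns an
    // incomplete or incorrect puzzle into a double-value penalty.
    public static int Score(Difficulty difficulty, bool mustHave, bool correct)
    {
        var points = (int)difficulty;
        if (correct)
        {
            return mustHave ? points * 2 : points;
        }
        return mustHave ? -points * 2 : 0;
    }
}
```

So a correct Medium marked as “Must Have” is worth 10 points, while a failed “Must Have” Hard costs the team 16.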

Optional Rules

These rules can add a hilarious…I mean “educational” element of stress to the game. Use as you see fit.

  • Incorrect puzzles previously delivered can be reintroduced later as “bugs”, which must be fixed before any other work, and are worth nothing.
  • Reserve some of the puzzles for “expedite” challenges. During the middle of an iteration throw one of these puzzles at the team, and tell them that it must be completed before the end of the iteration or their score for the iteration will be zero.
    • This can have some interesting side effects: if the team has over-committed with a large number of “Must Haves”, they can choose to intentionally fail the expedite challenge and zero out their score, saving themselves from a large negative total. One of my teams actually did this, and I totally didn’t see it coming.

Instructions

Make sure to set up one area for each team involved in the game. Separate the planning area from the working area, like in the following (terrible) diagram.

[Diagram: each team’s planning area separated from its working area]

Note that where I have said “puzzles”, that’s the planning area.

Start by introducing the teams to the rules (not the optional rules though, they shouldn’t see them coming, for maximum effect). Give them maybe 5 minutes to discuss and ask you questions. Don’t spell everything out, let them be responsible for getting to the detail by encouraging good questions.

At that point, show a nice visible timer (http://www.online-stopwatch.com/ isn’t bad) with 7 minutes on it, and tell them their planning time has started. No puzzles can be started until the planning time has finished. Note down what each team has committed to, and which puzzles (if any) are marked as “Must Have”. I found it most helpful to mark the puzzles themselves, which makes them much easier to keep track of later when marking, but make sure you also note down their commitments in some public place. I used a simple table on a whiteboard, and a simple notation of:

[Puzzle Number] [Difficulty] [Optional: Must Have] [Correct]

27 M ! [Tick] would indicate that puzzle 27, which was a Medium, was marked as a Must Have and was correct.

[Image: example commitment table on the whiteboard]

Once planning has completed, start the timer for execution (20 minutes is pretty good) and let them get started.

Once that timer expires, collect all of the puzzles (regardless of state) and start the timer for 7 minutes of planning and retrospective. While the teams are planning, mark the puzzles and note down the cumulative score for each team in some publicly visible place. The quicker you do this, the quicker the teams get feedback on how they are doing. Ideally it should be before their planning finishes, although even without marking the teams will have some idea of how they went.

Planning/retrospective time up, iteration time start.

Rinse and repeat until done.

Have a prize for the team with the best score. It’s always a fun note to end the activity on.

Observations

As with everything in life, people are bad at estimating, even when the estimating is implicit. Teams will almost certainly massively over-commit in their first iteration and then commit to a more reasonable amount in the second, quickly establishing what they are able to accomplish in the time provided. The interesting point here is that Sudoku puzzles are basically all the same, even though they vary in difficulty. After the first 2 iterations the teams will probably still not be all that great at estimating, but they will improve. Software, unfortunately, rarely conforms to the same sort of pattern as Sudoku puzzles, which means you may never see software development teams make accurate estimates. I’ve been trying to estimate software for years now, and while I’m definitely better than I used to be, the most important element in delivering on time is having flexibility in what you deliver, not getting the estimates perfectly accurate.

Other observations:

  • You may see a participant assume a command and control role, deciding what will be done and assigning out work in the first iteration, which may or may not work.
  • Some teams will establish themselves a mini-backlog and work through that in priority order (those teams are awesome).
  • People may try to cheat by saving partially completed puzzles in one iteration so they can continue to work on them in the next (that’s why you note down the committed puzzles publicly).

Summary

Like most of the various Agile training games, the game allows for two main outcomes.

One, it teaches Agile principles to the participants (which you should reinforce at the end with a summary). The participants don’t even need to know the principles beforehand, the activity itself can be a great way to introduce them.

Two, it allows you to watch participant behaviour. Watching the way people react to situations can tell you a lot about them, which you can then leverage later on. If you play this game after introducing Agile concepts, you will easily be able to see those who took them to heart (in comparison to the people who just fall back to their old ways).

Also, it’s fun to watch people squirm.


I love Dependency Injection.

I’ve only really been doing it for the past year or so, but I’ve noticed that smart usage of dependency injection makes code more loosely coupled, easier to change and easier to test.

Doing dependency injection well is hard. I highly suggest reading Dependency Injection in .NET. Actually, read that book AND read Mark Seemann’s excellent blog as well.

That last bit about classes designed with dependency injection being easier to test is kind of correct. They are certainly easy to isolate (for the purposes of unit testing), but classes that are designed with dependency injection should have their dependencies supplied during object construction (Constructor Injection).

The downside of using Constructor Injection, especially combined with Test Driven Development, is that your constructor is probably going to change quite a lot as you are developing. This, of course, has an impact on your unit tests, as depending on how you are instantiating your object, you may have to change quite a number of lines of code every time you make a change.

Annoying.

Tests shouldn’t be onerous. Yes, making a change to a constructor will probably have an impact on a test, but that impact should come out in the test results, not during the compile, because it’s the test results that show whether or not the impact of the change was meaningful. Also, getting hundreds of compiler errors just because you changed a constructor is kind of morale crushing.

First Attempt

The obvious solution to the problem is to factor out the construction of your object into a method in your test class.

[TestClass]
public class AccountsRepositoryTest
{
    [TestMethod]
    public void AccountsRepository_SearchByMember_ReturnsOnlyMemberAccounts()
    {
        var expectedMember = "285164";

        var accounts = new List<Account>
        {
            // .. bunch of accounts here
        };

        var accountsPersistenceSubstitute = Substitute.For<AccountsPersistence>();
        accountsPersistenceSubstitute.Retrieve().Returns(accounts);

        var target = CreateTarget(accountsPersistence: accountsPersistenceSubstitute);

        // .. rest of the test here
    }

    private AccountsRepository CreateTarget(AccountsPersistence accountsPersistence = null)
    {
        // Substitute any dependency the caller didn't supply.
        if (accountsPersistence == null)
        {
            accountsPersistence = Substitute.For<AccountsPersistence>();
        }

        var target = new AccountsRepository(accountsPersistence);
        return target;
    }
}

This is much better than riddling your tests with direct calls to the constructor, but it’s still an awful lot of code that I would prefer not to have to write (or maintain). It can get pretty onerous when your class has more than a few dependencies, too.


There’s got to be a better way!

Second Attempt

Well, there is. One of the reasons why Inversion of Control Containers exist is to help us construct our objects, and to allow us to change our constructors without having to change a bunch of code (yes there are many other reasons they exist, but creating object graphs is definitely one of the bigger ones).

Why not use an IoC container in the unit tests?

What I do now is:

[TestClass]
public class AccountsRepositoryTest
{
    [TestMethod]
    public void AccountsRepository_SearchByMember_ReturnsOnlyMemberAccounts()
    {
        var expectedMember = "285164";

        var accounts = new List<Account>
        {
            // .. bunch of accounts here
        };

        var accountsPersistenceSubstitute = Substitute.For<AccountsPersistence>();
        accountsPersistenceSubstitute.Retrieve().Returns(accounts);

        var kernel = CreateKernel();
        kernel.Rebind<AccountsPersistence>().ToConstant(accountsPersistenceSubstitute);

        var target = kernel.Get<AccountsRepository>();

        // .. rest of test
    }

    private IKernel CreateKernel()
    {
        var kernel = new NSubstituteMockingKernel();
        return kernel;
    }
}

Much better. Leveraging the power of the Ninject IKernel, along with the NSubstitute MockingKernel extension, means I only have to implement a small amount of code in each new test class (just the simple CreateKernel method). From the example it’s not immediately obvious what the benefits are, because I’ve had to write more code in the test method to deal with the kernel, but this approach really comes into its own when you have many dependencies (instead of just one) or your constructor is changing a lot.

Pros:

  • The test methods lose no expressiveness (they still state the dependencies they need control over and rebind them as necessary).
  • I’ve dropped all of the annoying boilerplate code that sets up substitutes for all of the dependencies that I don’t care about.
  • I don’t need to deal with a method in each test class that is essentially a surrogate constructor (which will need to change every time the constructor changes).

Cons:

  • I’ve hidden the pain that can come from having a class with many dependencies. This is good pain: it tells you that something is wrong with your class, and tests are one of the easiest places to feel it.
  • The test projects are now dependent on the IoC container.

I think it’s a worthy trade-off.

Special Kernels and Modules

One of the great things about Ninject is the ability to describe Modules (which contain binding information) and Kernels (which can load certain modules by default, and provide bindings of their own).

If you have certain bindings that need to be valid for your unit tests, or configured in some specific way, you can create a specific UnitTestModule that contains those bindings. I usually combine this with a UnitTestKernel, which just loads that UnitTestModule by default (just so I don’t have to manually load it in every CreateKernel method).

A good example of a use case for this is a project that I’m working on in my spare time. It is a WPF desktop application using the MVVM pattern, and makes use of TaskSchedulers to perform background work. I say TaskSchedulers plural because there are two, one for background threads and one for the main thread (to commit results to the View Model so that the UI will correctly refresh the bindings).

Unit testing code that involves multiple threads at its most basic level can be extremely difficult, especially if the background/foreground work is encapsulated well in the class.

This is where the UnitTestModule comes in. It provides a binding for the pair of TaskSchedulers (background and foreground) which is an implementation of a single threaded (or synchronous) TaskScheduler. This means that any background work happens synchronously, which makes the View Models much easier to test. You wouldn’t want to repeat this binding for every single View Model test class, so the UnitTestModule (and thus the UnitTestKernel) is the perfect place for it.

TaskSchedulers are great, precisely because you can provide your own implementations. I’ve used a CurrentThreadTaskScheduler and even a ManualTaskScheduler for the purposes of unit testing, and it really does make everything much easier to test. Of course, the implementations of TaskScheduler that are used in the real app need to be tested too, but that’s what integration testing is for.
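A synchronous scheduler along those lines is small enough to sketch here. This is my own minimal version of the common CurrentThreadTaskScheduler pattern, not code from any particular package:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Executes every queued task immediately on the calling thread, so any
// "background" work kicked off by a View Model completes synchronously,
// making it deterministic and easy to assert against in a unit test.
public class CurrentThreadTaskScheduler : TaskScheduler
{
    protected override void QueueTask(Task task)
    {
        TryExecuteTask(task);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return TryExecuteTask(task);
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        // Nothing is ever actually queued, so there is nothing to report.
        return Enumerable.Empty<Task>();
    }

    public override int MaximumConcurrencyLevel
    {
        get { return 1; }
    }
}
```

The UnitTestModule can then point both the background and foreground TaskScheduler bindings at this class, and every View Model test gets synchronous behaviour without any per-test setup.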

Conclusion

Tests shouldn’t be something that causes you to groan in frustration every time you have to make a change. I find that using an IoC container in my unit tests removes at least one of those “groan” points, the constructor, and feels much cleaner.


I’m going to show my ignorance here for a second. Don’t get me wrong, it’s always there, I just don’t always show it.

I didn’t understand what assembly binding redirection did until yesterday. I mean, I always saw app.config files being added or modified whenever I added a new package through NuGet, and it piqued my interest, but I never really had a reason to investigate further. It didn’t seem to be hurting anything (apart from some increased complexity in the config file), so I left it to do whatever the hell it was doing.

Yesterday (and the day before) I went through a solution with 58 projects with the intent of updating all of the Ninject references to the latest version, after I’d added a reference to the Ninject.MockingKernel.Moq package to a test project.

I know what you might be thinking: 58 projects! Yes, I know that’s a lot. It’s a bigger problem than I can solve right now though. It’s on my list.

I know the second thing you might be thinking too. How many references to Ninject were there to update? A well factored application would only have 1 reference, in the entry point (or Composition Root). This particular solution had 4 entry points (3 web applications and a service), but references to Ninject were riddled throughout the rest of the projects as well, for a number of reasons:

  • Usage of the Service Locator anti-pattern, which exposed a static IKernel to, well, anything and everything.
  • Usage of Ninject attributes (property injection, marking which constructor to use for the kernel, etc).
  • Some projects had NinjectModules that mapped the interfaces in those projects to the concrete implementation (also in those projects).

IoC containers (and Ninject in particular) are amazing for a variety of reasons I won’t go into here, but it’s very easy to use them poorly.

Normally this would be a fairly painless process, just go into the NuGet package manager, check the Updates section, select Ninject and hit update. The problem was, not all of the references to Ninject had been added through NuGet. Some were hard references to a version of Ninject that had been downloaded and placed in a lib folder. Other references to Ninject had been added automatically as a result of adding NuGet packages with Ninject dependencies (like Agatha).

Of course, I didn’t realise that some of the references had been manually added, so I naively just used NuGet to update all the references it did know about, compiled, and then did a quick smoke test on the web applications.

Chaos.

Specifically this:

Could not load file or assembly ‘Ninject, Version=3.0.0.0, Culture=neutral, PublicKeyToken=c7192dc5380945e7’ or one of its dependencies. The located assembly’s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040).

Fair enough, that seems straightforward. Something is trying to load the old version, and it’s going pear-shaped because the only Ninject.dll in the output directory is version 3.2.0.0. I went through the projects with a fine tooth comb, discovered that not all of the references had been updated to the latest version (and that some weren’t even using NuGet), fixed all that and tried again.

Still the same error.

I was sure that I had caught all the references, so I searched through all of the .csproj files for any reference to 3.0.0.0 and couldn’t find one.

If you’re familiar with binding redirects, you can probably guess the part of the story I left out.

When I did the upgrade/installation, I was very wary of unintended changes to other parts of the system. One of the things that happened as a result of installing/updating through NuGet was the editing or addition of many app.config files for the projects in the solution. Specifically, the addition of the following chunk of XML to every project using Ninject.

<dependentAssembly>
    <assemblyIdentity
        name="Ninject"
        publicKeyToken="c7192dc5380945e7"
        culture="neutral" />
    <bindingRedirect
        oldVersion="0.0.0.0-3.2.0.0"
        newVersion="3.2.0.0" />
</dependentAssembly>

Here’s where the ignorance I mentioned earlier shows up. I thought that since I wasn’t going to be using version 3.0.0.0 of Ninject anywhere, I could safely ignore those changes, so I removed them.

After an embarrassing amount of time spent yelling at my development environment and searching the internet (“Could not load X” errors are surprisingly common) I finally realised that it was my actions that caused my issue.

I was right: none of my assemblies were using Ninject 3.0.0.0. However, the Agatha.Ninject.dll assembly was. Having no control over that assembly, I couldn’t upgrade its reference. Of course, NuGet had already thought of this and helpfully solved the problem for me… until I ignored its suggested configuration.

A bindingRedirect entry in an app.config file forces all assembly bindings for that assembly to redirect to the version specified. This works not just for DLLs that you have control over (i.e. your entry point and your projects) but also for every assembly that you load, and their dependencies as well.

Restoring the bindingRedirects for the entry points (the 3 web applications and the service) fixed the issue. I left it out of the rest of the projects because it seems like the sort of thing you want to set only at the entry point (kind of like IoC container configuration).
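For reference, the redirect only takes effect when the dependentAssembly element sits inside the runtime/assemblyBinding section of the entry point’s config file; a minimal sketch of the full context looks like this:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Ninject"
                          publicKeyToken="c7192dc5380945e7"
                          culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-3.2.0.0"
                         newVersion="3.2.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```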

So in summary, never assume something exists for no reason, assume you just don’t understand the reason yet.


My name is Todd Bowles and I’m a Software Developer in Brisbane, Australia. I’m also an avid gardener, and I’m intent on turning my home and yard into an awesome food producing machine over the next god knows how many years.

This is where I am going to start saying things. Hopefully things that people will find useful and interesting, but also things that I just want to say. I'll try to keep things focused around software development and gardening (as per the title of the blog) but I can't make any promises.

Coding and compost. It’s an interesting combination, and the two things I spend most of my critical analysis skills on. Strangely enough, I think there are some parallels between the two, and I hope to go into them in more detail in the coming weeks/months/posts.

Meanwhile, this is just a quick introduction, so let’s leave it at that.

Alternate blog titles!

  • Programming and Permaculture
  • Preconditions and Potatoes
  • Semantics and Soil
  • Growing Software
  • The Code Garden