ClickOnce seems to be quite maligned on the internet, but I think it's a nice, simple publishing technology, as long as your application is small and doesn't need to do anything fancy on installation. It offers automatic updating as well, which is very nice.

Anyway, I was getting annoyed that the only way I could deploy a new version of this desktop WPF app was by going into Visual Studio: right-click the Project –> Properties –> Publish. Then I had to manually bump the version and make sure the other settings were correct.

It was all just too many clicks and too much to remember.

Complicating matters is that the app has 3 different build configurations: development, staging and production. Switching to any one of those build configurations changed the ClickOnce publish settings and the API endpoint used by the application. However, sometimes Visual Studio would just forget to update some of the ClickOnce publish settings, which caused me to publish a development application to the staging URL a couple of times (and vice versa). You had to actually reload the project or restart Visual Studio in order to guarantee that it would deploy to the correct location with the correct configuration. Frustrating (and dangerous!).

A little bit more information about the changes that occur as a result of selecting a different build configuration.

The API endpoint is just an app setting, so it uses Slow Cheetah and config transforms.

The publish URL (and supporting ClickOnce publish information) is stored in the csproj file though, so it uses a customised targets file, like this:

<Project ToolsVersion="3.5" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup Condition="'$(Configuration)' == 'development-release'">
        <PublishUrl>[MAPPED CLOUD DRIVE]\development\</PublishUrl>
        <InstallUrl>[PUBLICALLY ACCESSIBLE INSTALL URL]/development/</InstallUrl>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)' == 'staging-release'">
        <PublishUrl>[MAPPED CLOUD DRIVE]\staging\</PublishUrl>
        <InstallUrl>[PUBLICALLY ACCESSIBLE INSTALL URL]/staging/</InstallUrl>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)' == 'production-release'">
        <PublishUrl>[MAPPED CLOUD DRIVE]\production\</PublishUrl>
        <InstallUrl>[PUBLICALLY ACCESSIBLE INSTALL URL]/production/</InstallUrl>
    </PropertyGroup>
</Project>

This customised targets file was included in the csproj file like this:

<Import Project="$(ProjectDir)\Customized.targets" />

So, build script time! Mostly to make doing a deployment easier, but hopefully also to deal with that issue of selecting a build configuration and not having the proper settings applied.

Like everything involving software, many Yaks were shaved as part of automating this deployment.

Automation

My plan was to create 3 scripts, one to deploy to each environment. Those 3 scripts should use a single script as a base, so I don’t create a maintenance nightmare. This script would have to be parameterised around the target configuration. Sounds simple enough.

Let's look at the finished product first, then we'll go into each section in detail.

@ECHO OFF

SET publish_type=%1
SET install_url=[PUBLICALLY ACCESSIBLE INSTALL URL]/%publish_type%/
SET configuration=%publish_type%-release
SET remote_publish_destination=[MAPPED CLOUD DRIVE]\%publish_type%\

SET timestamp_file=publishDate.tmp
tools\date.exe +%%Y%%m%%d%%H%%M%%S > %timestamp_file%
SET /p timestamp_directory= < %timestamp_file%
DEL %timestamp_file%

REM %~dp0 is the directory containing the batch file, which is the Solution Directory.
SET publish_output=%~dp0publish\%publish_type%\%timestamp_directory%\
SET msbuild_output=%publish_output%msbuild\

tools\NuGet.exe restore [SOLUTION FILE]

"C:\Program Files (x86)\MSBuild\12.0\bin\msbuild.exe" [SOLUTION FILE] /t:clean,rebuild,publish /p:Configuration=%configuration%;PublishDir=%msbuild_output%;InstallUrl=%install_url%;IsWebBootstrapper=true;InstallFrom=Web

IF ERRORLEVEL 1 (
    ECHO Build Failure. Publish terminated.
    EXIT /B 1
)

REM Add a small (1 second) delay here because sometimes the robocopy fails to delete the files it's moving because they are in use (probably by MSBUILD).
TIMEOUT /T 1

robocopy %msbuild_output% %remote_publish_destination% /is /E

Not too bad. It fits on one screen, which is nice. The script itself doesn't actually do all that much; it just leverages MSBUILD and the build configurations that were already present in the solution.

The script lives inside the root directory of my solution, and it does everything relative to the directory that it's in (so you can freely move it around). Scripts with hardcoded directories are such a pain to use, and there's not even a good reason to do it. It's just as easy to do everything relative to the script's location.

Alas, it's not perfectly self-contained. It is reliant on Visual Studio 2013 being installed, and a couple of other things (which I will mention later). Ah well.

Variables

First up, the script sets some variables that it needs to work with. The only parameter supplied to the script is the publish or deployment type (for me that's development, staging or production). It then uses this value to select the appropriate build configuration (because they are named similarly) and the final publish location (which is a publicly accessible URL).

Working Directory

Secondly, the script creates a working directory using the type of publish being performed and the current date and time. I've used the "date" command-line tool for this, which I extracted (stole) from a set of Unix tools that were ported to Windows. It's completely self-contained (no dependencies, yeah!) so I've just included it in the tools directory of my repository. If you're wondering why it creates a file to put the timestamp into, this was because I had some issues with the timestamp not evaluating correctly when I just put it directly inside a variable. The SET /P line allows you to set a variable using input from the user, and the input that it is supplied with is the contents of the file (using the < operator). More tricksy than I would like, but it gets the job done.

My other option was to write a simple C# command line application to get the formatted timestamp for the directory name myself, but this was a good exercise in exploring batch files. I suppose I could have also used ScriptCS to just do it inline (or PowerShell) but that would have taken even more time to learn (I don't have a lot of PowerShell experience). This was the simplest and quickest solution in the end.
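For what it's worth, the C# version would have been tiny. Something like this (a minimal sketch, assuming the timestamp is simply written to standard output in the same shape as the date.exe output so the batch file can capture it):

using System;

namespace TimestampGenerator
{
    // Minimal sketch: print a sortable timestamp (yyyyMMddHHmmss) to standard
    // output so the batch file can capture it into a variable.
    internal static class Program
    {
        private static void Main()
        {
            Console.WriteLine(DateTime.Now.ToString("yyyyMMddHHmmss"));
        }
    }
}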

NuGet Package Restore

Third, the script restores the packages that the solution uses via NuGet. There are a couple of ways that you can do this inside the actual csproj files in the solution (using the MSBUILD targets and tasks), but I find that it's just easier to call NuGet.exe directly from the command line, especially if you're already in build script land. It's much more obvious what's going on.

Build and Publish

Fourth, we finally get to the meat of the build script, where it farms out the rebuild and publish to MSBUILD. You can set basically any property from the command line when doing a build through MSBUILD, and in this case it sets the selected build configuration, a local directory to dump the output to and some properties related to the eventual location of the deployed application.

The reason it sets the IsWebBootstrapper and InstallFrom properties on the command line is because I’ve specifically set the ClickOnce deployment property values in the project file to be non-functional. This is to prevent people from publishing without using the script, which as mentioned previously, can actually be a risky proposition due to the build configurations.

The build and publish is more complicated than it appears though, and the reason for that is versioning.

Versioning

Applications deployed through ClickOnce have two version-related attributes.

The first is the ApplicationVersion, and the second is MinimumRequiredVersion.

ApplicationVersion is the actual version of the application that you are deploying. Strangely enough, this is NOT the same as the version defined in the AssemblyInfo file of the project. This means that you can publish Version 1.0.0.0 of a ClickOnce application and have the actual deployed exe not be that version. In fact, that’s the easiest path to take. It takes significantly more effort to synchronize the two.

I don’t like that.

Having multiple identifiers for the same piece of software is a nightmare waiting to happen. Especially considering that when you actually try to reference the version from inside some piece of C#, you can either use the normal way (checking the version of the executing assembly) or you can check the ClickOnce deployment version.
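To make that concrete, here is a rough sketch of the two different version values you can end up looking at from inside the application (the class and property names are just for illustration, and the ClickOnce one needs a reference to System.Deployment):

using System;
using System.Deployment.Application;
using System.Reflection;

// Illustrative sketch: the two different "versions" a ClickOnce application can have.
public static class VersionInfo
{
    // The "normal" way: the AssemblyVersion baked into the executing assembly.
    public static Version AssemblyVersion
    {
        get { return Assembly.GetExecutingAssembly().GetName().Version; }
    }

    // The ClickOnce ApplicationVersion, which is only available when the
    // application was actually launched via its ClickOnce deployment.
    public static Version DeploymentVersion
    {
        get
        {
            return ApplicationDeployment.IsNetworkDeployed
                ? ApplicationDeployment.CurrentDeployment.CurrentVersion
                : null;
        }
    }
}

Unless you go out of your way to synchronize them, those two properties can quite happily return different values for the same deployed application.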

Anyway, MinimumRequiredVersion is for forcing users of the ClickOnce application to update to a specific version. In this case, the product owner required that the user always be using the latest version (which I agree with), so MinimumRequiredVersion needed to be synchronized with ApplicationVersion.

ClickOnce seems to assume that someone will be manually setting the ApplicationVersion (and maybe also the MinimumRequiredVersion) before a deployment occurs, and isn’t very friendly to automation.

I ended up having to write a customised MSBUILD task. It's nothing fancy (and looking back at it, I'm pretty sure there are many better ways to do it, maybe even using the community build tasks) but it gets the job done. You can see the source code of the build task here.

It takes a path to an AssemblyInfo file, reads the AssemblyVersion attribute from it, sets the build and revision versions to appropriate values (build is set to YYDDD i.e. 14295, revision is set to a monotonically increasing number, which is reset to 0 on the first build of each day), writes the version back to the AssemblyInfo file and then outputs the generated version, so that it can be used in future build steps.
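I won't reproduce the real task here, but its general shape is something like the following (a simplified sketch only; the actual implementation linked above differs in the details):

using System;
using System.IO;
using System.Text.RegularExpressions;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// Simplified sketch of the versioning task. It keeps the major/minor parts of the
// existing AssemblyVersion, sets build to YYDDD (two digit year + day of year) and
// increments revision, resetting it to 0 on the first build of each day.
public class ReadUpdateSaveAssemblyInfoVersionTask : Task
{
    [Required]
    public string AssemblyInfoSourcePath { get; set; }

    [Output]
    public string GeneratedVersion { get; set; }

    public override bool Execute()
    {
        var content = File.ReadAllText(AssemblyInfoSourcePath);
        var pattern = @"AssemblyVersion\(""(\d+)\.(\d+)\.(\d+)\.(\d+)""\)";
        var match = Regex.Match(content, pattern);
        if (!match.Success)
        {
            Log.LogError("No AssemblyVersion attribute found in {0}.", AssemblyInfoSourcePath);
            return false;
        }

        var now = DateTime.Now;
        var build = int.Parse(now.ToString("yy") + now.DayOfYear.ToString("000"));
        var previousBuild = int.Parse(match.Groups[3].Value);
        var previousRevision = int.Parse(match.Groups[4].Value);
        var revision = build == previousBuild ? previousRevision + 1 : 0;

        GeneratedVersion = string.Format("{0}.{1}.{2}.{3}", match.Groups[1].Value, match.Groups[2].Value, build, revision);

        var replacement = string.Format("AssemblyVersion(\"{0}\")", GeneratedVersion);
        File.WriteAllText(AssemblyInfoSourcePath, Regex.Replace(content, pattern, replacement));

        return true;
    }
}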

I use this custom task in a customised.targets file which is included in the project file for the application (in the same way as the old project customisations were included above).

This is what the targets file looks like.

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <GeneratedAssemblyVersion></GeneratedAssemblyVersion>
    </PropertyGroup>
    <UsingTask TaskName="Solavirum.Build.MSBuild.Tasks.ReadUpdateSaveAssemblyInfoVersionTask" AssemblyFile="$(SolutionDir)\tools\Solavirum.Build.MSBuild.Tasks.dll" />
    <Target Name="UpdateVersion">
        <Message Text="Updating AssemblyVersion in AssemblyInfo." Importance="high" />
        <ReadUpdateSaveAssemblyInfoVersionTask AssemblyInfoSourcePath="$(ProjectDir)\Properties\AssemblyInfo.cs">
            <Output TaskParameter="GeneratedVersion" PropertyName="GeneratedAssemblyVersion" />
        </ReadUpdateSaveAssemblyInfoVersionTask>
        <Message Text="New AssemblyVersion is $(GeneratedAssemblyVersion)" Importance="high" />
        <Message Text="Updating ClickOnce ApplicationVersion and MinimumRequiredVersion using AssemblyVersion" Importance="high" />
        <CreateProperty Value="$(GeneratedAssemblyVersion)">
            <Output TaskParameter="Value" PropertyName="ApplicationVersion" />
        </CreateProperty>
        <CreateProperty Value="$(GeneratedAssemblyVersion)">
            <Output TaskParameter="Value" PropertyName="MinimumRequiredVersion" />
        </CreateProperty>
        <!-- 
        This particular property needs to be set because of reasons. Honestly I'm not sure why, but if you don't set it
        the MinimumRequiredVersion attribute does not appear correctly inside the deployment manifest, even after setting
        the apparently correct property above.
        -->
        <CreateProperty Value="$(GeneratedAssemblyVersion)">
            <Output TaskParameter="Value" PropertyName="_DeploymentBuiltMinimumRequiredVersion" />
        </CreateProperty>
    </Target>
    <Target Name="BeforeBuild">
        <CallTarget Targets="UpdateVersion" />
    </Target>
</Project>

It's a little hard to read (XML) and it can be hard to understand if you're unfamiliar with the way that MSBUILD deals with…things. It has a strange way of taking output from a task and doing something with it, and it took me a long time to wrap my head around it.

From top to bottom, you can see that it creates a new Property called GeneratedAssemblyVersion and then calls into the custom task (which is available in a DLL in the tools directory of the repository). The version returned from the custom task is then used to set the ApplicationVersion and MinimumRequiredVersion properties (and to log some statements about what it's doing in the build output). Finally it configures the custom UpdateVersion target to be executed before a build.

Note that it was insufficient to just set the MinimumRequiredVersion; I also had to set the _DeploymentBuiltMinimumRequiredVersion. That took a while to figure out, and I still have no idea exactly why this step is necessary. If you don't do it though, your MinimumRequiredVersion won't work the way you expect it to.

Now that we’ve gone on a massive detour around versioning, its time to go back to the build script itself.

Deployment

The last step in the build script is to actually deploy the contents of the publish directory filled by MSBUILD to the remote URL.

This publish directory typically contains a .application file (which is the deployment manifest), a setup.exe and an ApplicationFiles directory containing another versioned directory with the actual EXE and supporting files inside. It's a nice clean structure, because you can deploy over the top of a previous version and it will keep that old version's files around and only update the .application file (which defines the latest version and some other stuff).

For this application, the deployment URL is an Amazon S3 storage account, parts of which are publicly accessible. I use TNTDrive to map a folder in the S3 account to a local drive, which in my case is Y.

With the deployment location available just like a local drive, all I need to do is use robocopy to move the files over.

I’ve got some basic error checking before the copy (just to ensure that it doesn’t try to publish if the build fails) and I included a short delay before the copy because of some issues I was having with the files being locked even though MSBuild had returned.

Future Points of Improvement

I don’t like the fact that someone couldn’t just clone the repository that the build script is inside and run it. That makes me sad. You have to install the TNTDrive client and then map it to the same drive letter as the script expects. A point of improvement for me would be to use some sort of portable tool to copy into an Amazon S3 account. I’m sure the tool exists, I just haven’t had a chance to look for it yet. Ahh pragmatism, sometimes I hate you.

A second point of improvement would be to make the script run the unit and integration tests and not publish if they fail. I actually had a prototype of this working at one point, but it needed more work and I had to pull back from it in order to get the script completed in time.

Lastly, it would be nice if I had a build server for this particular application. Something like TeamCity or AppVeyor. I think this is a great idea, but for this particular application (which I only work on contractually, for a few hours each week) it's not really something I have time to invest in. Yet.

Conclusion

ClickOnce is a great technology, but it was quite an adventure getting a script based deployment to work. Documentation in particular is pretty lacklustre, especially around what exactly some of the deployment properties mean (and their behaviour).

Still, the end result solved my initial problems. I no longer have to worry about accidentally publishing an application to the wrong place, and I can just enter one simple command from the command line to publish. I even managed to fix up some versioning issues I had with the whole thing along the way.

Victory!

I built a vegetable bed out of wood!

I’ve been wanting to do that for a while actually. The first two vegetable beds that I built were made of leftover bricks (no mortar). They worked well enough, but it was hard to build them as deep as I wanted, because once they got to a certain height they were very easily knocked over (as my dogs found out).

As per the title, I have come to the conclusion that I am no carpenter.

The Script

Simple rectangular bed. Two sleepers high, each sleeper bolted to a corner post. SketchUp is great.

The sleepers should be ACQ treated (CCA is possibly bad for vegetable gardens, as the chemicals can leach into the soil, and then into the vegetables and then into you). I went with hardwood because it was cheaper. I'm not entirely sure this was a good idea in the end, as drilling the holes for the bolts was a massive pain.

The Cast

  • 4 x ACQ Treated Hardwood Sleepers (1500 x 50 x 200)
  • 4 x ACQ Treated Hardwood Sleepers (1200 x 50 x 200)
  • 4 x ACQ Treated Poles (50 x 50 x 600)
  • 32 x M8 120mm Galvanized Bolts (and nuts) (M8 is 8mm diameter)

I ordered the timber from Lobb St Sawmill in Ipswich. I don’t remember exactly why I chose them, but all up it cost $218, which included $50 delivery. They cut it all to length for me, which saved a bunch of trouble. I actually got 6 of each of the sleepers because I was going to do a bed 3 sleepers high, but 2 was more than enough. Leftover timber for the next bed I suppose.

The Props

  • 1 x 30 Year Old Software Developer with no experience in carpentry
  • 2 x Clamps (or more; clamps are good and helpful)
  • 1 x Electric Drill (cordless is best)
  • 1 x 9mm drill bit
  • 2 x Sawhorses
  • 1 x Adjustable Spanner
  • 1 x Non-adjustable Spanner of the correct nut size

I had this stuff already, but I don’t think I needed any other specific tools. I’m no expert of course, so there’s probably a tool that would have made the entire process 200% easier.

Act 1: Hopes are High, Progress is Made

Surely the easiest way to accomplish this would be to measure up where the holes need to go on all the pieces and drill them? Then I can just take the bits, slot them together and all will be well.

That’s where I started anyway.

I clamped a sleeper and post together, measured out where I wanted the bolts to go, marked the spots with some chalk and started drilling.

The drilling was pretty hard (hardwood, not just a name), but everything appeared to be going smoothly…

Act 2: Tragedy Strikes!

It's always a good idea to test your work in its intended configuration (integration test!) before you go too far. It's a great way of discovering problems early, while you're still able to adapt easily.

Thank God for integration tests.

I’d made holes in a few of the sleepers (3?) and matching holes in one post. I moved those bits into position, in the configuration the bed would actually be put together in and…it didn’t work.

The holes didn’t line up enough for me to put the bolts through.

This confused the hell out of me for a few moments. I’d gone to the trouble of measuring and then marking out exactly where the holes should be on each of the pieces of timber. I drilled exactly where those marks were. When I put the whole thing together, it just didn't fit. Nothing was aligned, and in the cases where the holes were aligned enough to fit a bolt in, it wouldn’t go all the way through.

Turns out, I can’t drill straight holes.

Well, I can’t drill holes straight enough that they are interchangeable.

A good lesson to learn early, that's for sure.

Luckily for me, I hadn’t just drilled the holes individually into the pieces of timber. I’d actually clamped the sleepers and poles together, and drilled all the way through at once.

This meant that there was actually a configuration of the timber I’d already drilled holes in that worked. I just needed to find it.

After some trial and error, I managed to get the first corner of the bed together.

Act 3: Out of the Abyss, Into the Light

New plan!

Put the bed together in exactly the same configuration that it will be in once it's finished, and drill the holes like that.

This worked much better. No integration issues, because everything was already in the right place. I just had to clamp the timber (I only had two clamps, so I could only clamp the sleeper I was drilling), drill the necessary holes, pop a bolt through to hold it in place and then move on. I also raised the bed off the ground with some bricks, in order to be able to put the clamp underneath the lowest sleeper while drilling. Luckily the sleepers were very heavy and very thick, so they just kind of stayed in place on the sides that I hadn’t drilled/bolted. My dogs didn’t even knock one down, which was a miracle in itself.

Anyway, 3 hours later, I was done!

Act 4: The Hero’s Journey

Now to move the bastard. This time I actually thought ahead and marked the sleepers and posts with alpha-numeric labels so that I would know exactly how to put everything back together.

I took all of the bolts out, carefully put them aside where I wouldn’t lose them, pulled the whole thing apart and trucked it out into position.

The marks identifying each sleeper and its relationship with the appropriate post were super helpful here. Without them I would have certainly screwed up the position of the sleepers, and after my previous realization that none of the drilled holes were interchangeable, it would have been a hell of an adventure putting everything back together.

I had prepared the area where the bed would be going earlier, flattening it out and making sure that there was nothing in the way, so all I had to do was rebuild the bed in the new position.

It went fairly smoothly actually. The only thing worth mentioning was that the ground I'd prepared wasn't quite flat, and the sleepers were very flat, so I had to prop up one end with some extra soil in order for the whole thing to be level.

Act 5: Ultimate Showdown of Ultimate Destiny

Chicken Manure Pellets + Gypsum (I have really clay soil) on the bottom, followed by newspapers, followed by water, followed by a mixture of compost (my own), horse manure (not my own) and soil.

I only had enough stuff to fill it up to about 1/3 of the way to the top (and I originally wanted 3 sleepers high!), but that was enough to get it started.

Look at it!

Glorious.

Note that I left the posts sticking up above the actual sides of the bed. Like I said, I was originally going to go 3 sleepers deep (600mm) but decided against that when I realised how deep 2 sleepers was. I chose to leave the posts sticking out above the top for 2 reasons.

  1. I can hold onto the post if I need to lean into the garden. Support!
  2. I can probably use them to anchor additional garden things. More about this in a future blog post.

Bonus picture of the bed 6 weeks later, featuring a bunch of self-sown green stuff (I see some vines of some description, maybe melons, maybe something pumpkin-like, sunflowers, some tomatoes, there were some mushrooms as well).

Finale

Here’s a link to the Imgur album containing all of the photos.

I like gardening and building things. Being very physical, it's different enough from software development that it provides a nice break for me, which helps to stop me from getting burnt out.

I treat my garden much the same way as I would any large software project. I’ve got a bunch of things in my backlog that I want to get accomplished, and each one is broken down to be somewhat independently achievable. Obviously, there are some things that have dependencies, but that’s not unusual (can’t exactly setup soak irrigation in a bed that doesn’t exist yet). Everything is prioritized based on a number of factors, including ease of completion, value, cost, etc. I regularly give thought to things in the backlog, contemplating how they might work and what I would need to do to accomplish them (backlog grooming). I even have planning (which generally happens at the start of a weekend), a demo (hey awesome fiancée, come see what I did) and retrospectives (typically at the end of the weekend).

When it comes time to actually pick something off the backlog and do it, I give it some thought, come up with a design, implement it, release it and see how it goes. Most of the time the tasks are small and cheap enough to just do them. If I mess up, I can always wipe out what I’ve done and start again.

Agile! Not just for software!

In my last blog post (Permaculture Paradise, the road so far…, an instant classic by the way), I mentioned that I found it to be quite an ordeal to create animations to show how my garden changed over time.

I’ve been taking weekly photos of various locations around my yard over the past year or so. They act as a form of metric, allowing me to compare two arbitrary points in time, and I use that ability to verify that I’m actually making things better. I use my phone to take the photos, which are then automatically synced to OneDrive (which is then in turn synced to my other computers). Handy.

Obviously I don’t have tripods or any other form of static photography setup in my yard (they’d just get in the way) so I have to manually do my best to align the photos to static points of reference, so that you can flip through them quickly to see change.

As an aside, I’ve thought before of making an app for my phone that would help me do the above. The simplest implementation I can think of would be to get the app to overlay the last image taken over the live feed of the camera screen (with a low opacity) so the user could align the static points in the last photo to now and make sure that the photo was taken from the same angle/position. I’m sure there’s also a whole bunch of complicated things you could do in order to programmatically match up static points of reference (like facial recognition algorithms maybe) but I feel that that sort of thing is a little beyond my grasp at this point in time. It would probably be really interesting though.

Anyway, back to the making animations thing. I thought that since I have a blog now, I’d be able to use those photos to show my legions of readers (ha) the changes happening in my garden. I assumed that stitching together a series of images into an animated gif would be easy. I mean, the internet is FULL of animated gifs, surely it can’t be that hard.

Turns out it's pretty hard.

I should clarify that statement somewhat I suppose. It's actually really easy to stitch together a bunch of images into an animated gif. There are online services that do just that (makeagif is one for example, but there are others). The hard bit was the combination of my photos being quite large along with the fact that they didn't necessarily transition cleanly from one picture to the next (what can I say, taking pictures in exactly the same position from week to week is hard).

The First Attempt

My first thought was to find a way to resize the images down to an appropriate size and then use one of the online services. Seems fair enough, any image editing program will let you do just that. I’m a lazy person however (to potential employers, remember, laziness is one of the great virtues of a programmer) so I didn’t want to have to go through the hundreds of photos I had and manually resize them. I was prepared to go through a significant amount of effort to avoid having to do the thing manually in fact.

Enter ImageMagick.

ImageMagick is a sweet little tool for doing command line manipulation of images, including resizing. It also makes gifs! Fantastic, two birds with one stone. To hell with using an online service, I’ll just use this tool to make my gifs myself. Much better.

I grabbed the installed version of ImageMagick, installed it on a Virtual Machine, stole the install directory (god I hate installing development software on my main machine sometimes, it always comes with a bunch of “helpful” stuff, like examples and tutorials and all sorts of other crap that’s better stored on the internet where I could actually search for it) and tossed it into my tools directory.

As a general rule of thumb, I’m very much in favour of portable applications when command line tools are concerned. I have a Tools directory in my OneDrive that contains cmder and a bunch of tools that I use from the command line (git, python). Whenever I find a new command line tool, I like to add it to my Tools/cmder/vendor directory and then customize the init.bat to add that directory (or whatever the appropriate bin directory is) to my PATH when running cmder.

As is sometimes the case with this sort of "let's just steal the Program Files directory" approach, it didn't work.

convert *.jpg composite.gif
Invalid Parameter - composite.gif

Well, that’s a little weird. I mean, that’s the exact command that ImageMagick expects. A simple conversion of all jpgs in the directory to an animated gif.

It turns out that there is already a convert command defined in system32, and when I added ImageMagick to my path in cmder, I added it like this (which was consistent with all of the other path customisations).

@set PATH=%PATH%;%CMDER_ROOT%\vendor\ImageMagick-6.8.9-Q16-64-NP; 

That was wrong, because the ImageMagick convert command appeared later in the path than the built-in Windows convert command.

I have no idea what the built in convert command does, but I’m glad it was non-destructive when pointed at my image files. I mean I copied them to a working directory just in case I messed them up, but still, it could have done almost anything.

Anyway, once I fixed that by pre-pending the ImageMagick directory to my path (instead of appending it), I got a new error.

convert *.jpg composite.gif
convert.exe: RegistryKeyLookupFailed CoderModulesPath @ error/module.c/GetMagickModulePath/662
convert.exe: no decode delegate for this image format JPEG @ error/constitute.c/ReadImage/501

There were a lot of errors like that.

My own fault really, I should know better than to assume an installed app will just work if you move it around. The installed version of ImageMagick is dependent on some registry entries, which seem to be where it defines the location for the modules for dealing with files of a particular format. Something like that anyway. Ugh, registry entries.

I went back to the ImageMagick site and grabbed the portable build instead and tried again.

convert *.jpg composite.gif
convert.exe: Memory allocation failed composite.gif @ error/quantize.c/QuantizeImage/2733 error/gif.c/WriteGIFImage/1642

Yessss, progress.

Nooooo, memory allocation failure.

I’ll save you the long rambling investigation here. I had plenty of memory available, but the portable build of ImageMagick is 32 bit and the images I was trying to convert into a gif were pushing the 1.9GB memory limit for a 32 bit process almost straight away. I tried resizing the files to be smaller, but any time I tried to make a gif with all 47 of the images I just kept running into the memory limit.

The Tangent

I’m a developer! I should be able to take the ImageMagick source and create a portable build targeted to a 64 bit environment. There’s even instructions on the ImageMagick website about making a build of your own, how hard can it be?

About a day later I gave up. It's C++ and old, and Visual Studio 2013 just could not cope with it. While I'm sure I would have been able to get there in the end, the motivation just wasn't there. Sometimes you just have to accept that maybe you shouldn't be shaving that particular yak.

The Second Attempt

I installed the 64 bit version of ImageMagick and composed the gif from the raw photos.

It looked terrible.

My photos had enough difference between them that the resulting gif was jumping all over the place, and was just plain unpleasant to watch. It didn't even look like it had included all of the images either. The first few images in the series were from my old phone as well, so they had completely different dimensions, which really messed everything up (and may have been the reason for it not including all of the images in the gif).

The Third Attempt

Good old Paint.NET. You have a plugin for dealing with animated gifs as well.

That Paint.NET website is a horrible horrible piece of work, with a ton of “download” links that are actually just advertisements. I’ve been using Paint.NET 3.5.10 for a while now, but it looks like the latest version is 4.0.3 on the website. Here is a direct download link for that version. Not sure how long it will be valid for.

I used Paint.NET to load all of the photos for a single location in the garden as layers (Layers –> Import from File). I then resized the canvas so that there was some breathing room around all of the images and hid all of the layers except for the first one. Finally I went through each layer, made it visible, set its opacity to approximately 50% and manually moved it into position. When moving the image, I picked a static point of reference (this time it was my compost bins, which was nice because they were in the centre) and aligned that point on the first layer to each additional layer. I did this to mitigate the jumping effect that I noticed after making the gif using ImageMagick. With a point of reference for the human eye to latch on to, the resulting gif appears much smoother than it actually is.

Finally, to deal with the fact that not all of my images were the same dimensions, I cropped everything that didn’t contain content in every layer (make selection, Image –> Crop to Selection).

Lastly, I saved as an animated gif.

Victory!

Well, almost. The resulting gif was pretty huge (100+MB), because I forgot to resize the photos down. At least with all the images in the same document the fix was simple though, Image –> Resize –> X%, repeat until I got the resulting gif down to a size I was more comfortable with (14MB).

Bonus Attempt!

I did question whether or not an animated gif was really the best way to do the change animation, so I tried Windows Movie Maker. While it did let me make a movie of the images stitched together fairly easily, I really couldn't do anything more than that (and I really needed to reposition the images to make the resulting animation look acceptable). There were a few other video making tools that I considered as well (Lightworks was one) but they all had fairly steep learning curves, and while I wanted to make a nice animation, I didn't have an infinite amount of time available.

The Delivery

14MB is still too large for someone to download via a blog post though. It's fine for my own personal use, but that's a hell of a lot of data to pull down just to look at how my garden changed over time. It would also mean that I would have to host that file as well, which would have a direct impact on my own bandwidth costs (I think anyway. I mean this blog is hosted on Azure. I should check that…)

After a bit of searching, I found gfycat. It's a nice little service that allows you to upload a gif and then either embed it or link to it from your website. The gif is actually converted to an MP4 (I think) and when embedded or linked, will be displayed as either an HTML5 video or directly as an animated gif. The whole service is powered by Amazon as well, so I don't need to worry about content location or anything like that. It just works.

I won’t embed the image again (because its already in my Permaculture Paradise, the road so far… post) but its directly available at http://gfycat.com/EnchantedDearestIrukandjijellyfish if you want to have a look.

It ended up being 4MB in gfy format in the end, which is still incredibly large (sorry mobile users). I think I'll need to consider making future animations at a lower resolution or reducing the quality in some way in order to drop it below 1MB at least. It's an ongoing process.

Gfycat has some instructions for embedding a gfy into your medium (website, blog post, whatever). It involves adding some script to your page, and then including an img tag with specific class and data-id attributes. Fairly straightforward.

The Javascript they ask you to include is:

(function(d, t) {
    var g = d.createElement(t),
        s = d.getElementsByTagName(t)[0];
    g.src = 'http://assets.gfycat.com/js/gfyajax-0.517d.js';
    s.parentNode.insertBefore(g, s);
}(document, 'script'));

A little bit strange actually, because all it does is dynamically insert a script element (pointing at that JavaScript file) into your page when it runs. Kind of pointless. Why not just use the src attribute of a normal script tag?

Anyway, you combine that piece of JavaScript with an img tag that looks like the following:

<img class="gfyitem" data-id="[KEY GOES HERE]" />

I tried to read through the contents of the JS file, but didn’t spend too much time on it. I assume what it does is find all img tags that have a class of gfyitem and then mutate them to be either the HTML5 video tag (if supported by the browser) or a link to the plain old gif, for whatever key you’ve referenced.

I’m using a customised version of MiniBlog, so I just added a script tag containing a reference to the JS file to the end of the _Layout.cshtml file of my chosen theme (OneColumn forever!), pushed to Azure (Git deployment is amazing) and the img tag that I’d inserted into my blog post was magically transformed into a gfy.

I like it when things just work.

I had to manually insert the image tag in my blog post using Windows Live Writer though, as it wasn't an image link in the traditional sense, what with it not having a src and all. It's fairly easy to edit the source when using Windows Live Writer, but maybe a Plugin would be nice? I wonder how you write a Windows Live Writer plugin?

Conclusion

Making a nice looking gif from my photos was a lot harder and a lot more time consuming than I originally thought it would be.

At least some of that was caused by me, as I didn’t want to settle for a crappy looking gif, and I didn’t (originally) want to have to do the whole thing manually. My initial dream was to have the images smoothly morph from one to the other, but I quickly abandoned that and settled for just making a basic animation. That was probably for the best, as I still have no idea how I could accomplish the morphing. I did run across some techniques for doing it, but they looked complex enough for morphing just two images, let alone doing it for 47.

When it comes to manipulating images, I think you really do need to see what you’re doing in order to make something that looks good. I’m sure there’s a bunch of crazy people out there who could visualize all the images in their head and do the entire thing (including repositioning) through the command line, but I am not one of those people. In the end, I used the image editing software that I was most familiar with, and I thank God that someone made a plugin for exporting animated gifs from it.

I only ended up making a single animation (because of how time consuming the whole process was) but I will be creating more in the future. I've got at least 10 other locations in my garden that I can use, and I'm toying with applying the same change tracking process to my own body. Hopefully I'll get better at making the animations over time, because it took me at least an hour to put together the one I did make.

I hope that my long rambling journey through making a time-lapse animation will be of use to someone. I mean, it was worthwhile to me, because I learnt a lot about the whole process, and I hope I’ve managed to capture some of that experience in this post.

In my first post I mentioned that I would sometimes stray away from technical subjects, and instead focus on the adventure of creating a magical, food producing forest on my land. I haven't blogged about it yet, because to be honest, I couldn't figure out how to generate a nice time-lapse from the weekly photos that I've been taking for the past year or so. That particular journey (putting together a time-lapse) will be a blog post all by itself, and involved shaving many yaks. I'm still not particularly happy with the time-lapse that you will see later in this post, but at least it's a start. Alas, sometimes pragmatism has to win over perfection if you just want to get something done.

This blog post will provide some background for future gardening posts, and acts as an introduction into the “compost” part of this blog.

The Dream

My dream is to have a thriving garden, almost every part of which produces something edible. Nut trees, fruit trees, vegetables, bushes, chickens, worms, everything you can think of. I don’t want to have to rely on the constant application of commercial fertilizers and pesticides, I’d like it all to be organic. Ideally, I’d also like it to be a mostly closed system, where I don’t even have to introduce materials from the outside in order to sustain it. I’d also like it to be pretty to look at (I like green things) and to maybe provide a bit of privacy for the house (the back yard is a little open, we’re on a hill).

Trying to find a way to accomplish the above dream led me to the concept of permaculture.

Permaculture is a branch of ecological design, ecological engineering, environmental design, construction and integrated water resources management that develops sustainable architecture, regenerative and self-maintained habitat and agricultural systems modelled from natural ecosystems.

Source: Wikipedia

The definition sounded like it fit, so yay! It's nice to have a name for something. It helps when talking about it (and searching for it).

As far as I can see, it comes down to realizing that the garden will be a complex system, consisting of many disparate parts that must interact along specific channels. Kind of like any software project…

The Location

I’ve got a fairly normal sized block of land in Forest Lake, QLD. Its a little over 600m2, and the house takes up a large chunk of it.

To assist with visualisation, my awesome fiancée used SketchUp to create a model of our house and the surrounding land. Here are some screenshots of the model:

 

Honestly, she’s pretty amazing when it comes to this sort of thing. She went and measured up the entirely of the house and land and then spent a few (wo)man days of effort modelling it all. We’ve used the model to discuss things, model new ideas and communicate them to builders and landscapers. Very very useful.

When we moved in in late 2010 the gardens were not that bad, but not that good either. We were mostly attracted to the house itself to be honest. It was the only place that my fiancée and I both walked into and said "This place feels right". If you are looking for a house to buy, that's pretty much the best sign ever by the way. You should buy that house.

There were at least some garden beds around the perimeter, and the plants were doing okay. The lawn was mostly weeds, some Couch grass and that really annoying Crows Foot grass. The back and side yards were fairly heavily compacted as the previous owner had sold caravans (and was constantly moving them around). The soil consisted mostly of clay, rock and pain, and was hydrophobic as well (not a lot of organic matter to help retain water). Rainstorms were particularly annoying, because the water would just run right off and into the lowest corner of the back yard, which made that corner squelchy and gross.

The model above actually includes some changes we've made since we moved in. There was a tank under the rear patio and no concrete slab down there (just dirt), and there were 2 shrubs near the western side of the fence in the back yard. About two years after we moved in we had underneath the patio sealed and the existing lawn torn up and turfed (with Sir Walter Buffalo).

You can see the way it used to look in this picture (with bonus dogs!).

The backyard, pre-concrete and lawn.

The concreting and lawn work made the back yard quite nice, but maintaining the lawn is an ongoing process. I’ll probably go over that in a future post.

It's not perfect, but it's enough, and I think I can make it something awesome.

The Plan

I’ve got a lot of ideas. Some of them are insane and some are feasible, and for a while there, I couldn’t easily differentiate between the two. There was a time I considered turning an old water tank we had into a tank for aquaponics! I didn’t even have a normal vegetable bed yet! That would probably have been an expensive lesson in failure (or maybe an awesome lesson in how to be awesome, I’ll never know).

Luckily cooler heads prevailed (i.e. not mine, credit goes to the awesome fiancée again) and I started with smaller things, like compost, and vegetable beds.

The Professionals

While I love solving problems (it's what I do!) it's never a bad idea to involve a professional when you're looking to do any sort of long term project like this. This is where I engaged the help of All You Can Eat Gardens, a local Brisbane business that helps people do exactly what I've been talking about. I got them to do a concept for the whole yard, taking into account my own ideas and applying their own professional experience. It turned out pretty good, and I found them to be very communicative throughout the entire process (although they get quite busy, so I had a fairly long wait before they could get to me).

This was their concept:

Concept

It pleases me greatly on some sort of engineer/graphic design level. I’m not entirely sure why. The concept also came with a document detailing a lot of things in the concept diagram, resources to follow and other useful tidbits. I think it was a worthwhile investment.

At some stage I’ll go back to them and get them to do a detailed design (in which they help you plan out in detail exactly what goes where and how to do it) but for now I’ll just follow through on the general stuff in the concept and keep learning at my own pace.

The Progress

So far (and with the help of my fiancée), I have:

  • 2 established vegetable beds, made with leftover bricks, both with a drip irrigation system.
  • 1 recently built timber bed, which won’t be ready (i.e. full of soil) until next season. Building this bed was a great learning experience, and I’ll do a blog post about it at some point in the future. 
  • 2 compost bins, which are awesome.
  • A mulch pit, which has helped me to improve various areas (yay organic matter!).
  • A thriving herb garden in a previously barren area under the patio.

Here’s a time-lapse (one photo/week, over 47 weeks) of the eastern side of the house. This is the location of the vegetable beds, the compost bins, the (short lived) worm farm, the mulch pit and a few other things.

That particular time-lapse is hosted on gfycat, and is available at http://gfycat.com/EnchantedDearestIrukandjijellyfish if you want to look at it directly. If you're running a modern browser you should be able to pause, speed-up, slow-down, reverse and all other kinds of nifty things. If you're running an old browser, sorry, it's a fairly massive animated gif for you (get a better browser!).

I have photos that I can use to make similar time-lapses of other areas of the yard, but they take forever to stitch together into animated gifs that don’t look terrible, so they might come in future posts.

The eastern side is really a test area, where I can get a handle on things before I involve any more of the yard. It probably won’t be the permanent location of the vegetable beds at least (not enough sun), but its a great, isolated place to learn. The compost bins have been effective, I use them to dispose of all of the vegetable scraps from the kitchen. The compost I’ve harvested from them has been rich and black, and I’ve used it to enrich the perimeter beds and to improve the soil on the very barren western side of the house.

Not too bad.

The next steps are to continue building out vegetable beds (1 more, then replace the two brick ones with wood as well) until I have all 4 needed for crop rotation, and to keep building up the soil.

More to come!

Yeah, sorry about the title. They can’t all be winners.

Anyway, I tracked down an interesting bug a few weeks ago, and I thought that it might be worthwhile to discuss it here, so that I can see it later when the memory fades. Also it might help someone else, which is nice, I guess.

The Problem

There was a web application. Some performance/load testing was being completed (for the first time, after development had "completed", because who needs an Agile approach, right?) and the results showed that there was an unreasonable number of failures during login. Something in the realm of 10% of all login attempts under a medium load would fail.

The Cause

At the root, the bug involved this class:

public class ExecutionEngine
{
    public string ConnectionString { get; set; }
    public SqlCommand Command { get; set; }
    public DataSet Result { get; set; }

    public void Execute()
    {
        var conn = new SqlConnection(ConnectionString);
        Command.Connection = conn;
        var adapter = new SqlDataAdapter(Command);

        Result = new DataSet();
        adapter.Fill(Result);
    }
}

Pretty terrible all things considered. Weird way to use SqlCommands and DataSets, but okay.

A typical usage of the ExecutionEngine class was as follows:

public class Demonstration
{
    public Demonstration(ExecutionEngine engine)
    {
        _Engine = engine;
    }

    private readonly ExecutionEngine _Engine;

    public IEnumerable<string> GetTheThingsForAUser(string userId)
    {
        _Engine.Command.CommandText = "GetAListOfAllTheThings";
        _Engine.Command.CommandType = CommandType.StoredProcedure;

        _Engine.Command.Parameters.Clear();
        _Engine.Command.Parameters.AddWithValue("UserId", userId);

        _Engine.Execute();

        var allTheThings = new List<string>();
        foreach (DataRow thing in _Engine.Result.Tables[0].Rows)
        {
            allTheThings.Add((string)thing["Name"]);
        }

        return allTheThings;
    }
}

There were a LOT of usages like the demonstration class above (100+), jammed into one SUPER-class called “DataAccessLayer”. This “DAL” was a dependency of the business classes, which were used by the rest of the system. An instance of a business class would be instantiated as needed, which in turn would resolve its dependencies (using Ninject) and then be used to service the incoming request.

Given that I’ve already spoiled the ending by mentioning threading in the title of this post, you can probably guess that there was a threading problem. Well, there was.

The ExecutionEngine class is obviously not thread-safe. At any point in time if you have one instance of this class being used on multiple threads, you could conceivably get some very strange results. Best case would be errors. Worst case would be someone else's data! To illustrate:

  1. Thread A enters GetTheThingsForAUser
  2. Thread A sets the command text and type to the appropriate values.
  3. Thread B enters GetTheThingsForAUser
  4. Thread A clears the existing parameters and adds its User Id
  5. Thread B clears the parameters and adds its User Id
  6. Thread A executes, grabs the result and returns it. Thread A just returned the values for a completely different user than the one it asked for, but has given no indication of this!

At the very least, the developer who created the class had thought about thread-safety (or someone had thought about it later).

public class DataAccessLayerModule : NinjectModule
{
    public override void Load()
    {
        Bind<ExecutionEngine>().ToSelf().InThreadScope();
    }
}

For those of you unfamiliar with the Thread scope, it ensures that there is one instance of the class instantiated per thread, created at the time of dependency resolution. It adds thread affinity to classes that don’t otherwise have it, but ONLY during construction.
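If that explanation is a bit abstract, this little snippet (purely illustrative, not part of the application) shows the behaviour:

using System;
using System.Threading;
using Ninject;

// Purely illustrative: with InThreadScope, two resolutions on the same thread
// return the same instance, while a resolution on another thread returns a new one.
public static class ThreadScopeDemo
{
    public static void Main()
    {
        var kernel = new StandardKernel(new DataAccessLayerModule());

        ExecutionEngine first = null;
        ExecutionEngine second = null;

        var worker = new Thread(() =>
        {
            first = kernel.Get<ExecutionEngine>();
            second = kernel.Get<ExecutionEngine>();
        });
        worker.Start();
        worker.Join();

        var other = kernel.Get<ExecutionEngine>();

        Console.WriteLine(ReferenceEquals(first, second)); // True (same thread).
        Console.WriteLine(ReferenceEquals(first, other));  // False (different thread).
    }
}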

At least there will be only one instance of this created per thread, and a single thread isn’t going to be jumping between multiple method executions (probably, I’m not sure to be honest) so at least the lack of thread safety might not be an issue.

This was, of course, a red herring. The lack of thread-safety was EXACTLY the issue. It took an embarrassingly large amount of time for me to track down the root cause. I debugged, watched the business objects being instantiated and then watched the execution engine being injected into them…with the correct thread affinity. Only the latest version of this web application was experiencing the issue, so it had to have been a relatively recent change (although this particular project did have quite a long and…storied history).

The root issue turned out to be the following:

public class DataAccessLayerModule : NinjectModule
{
    public override void Load()
    {
        // Some bindings.

        Bind<ExecutionEngine>().ToSelf().InThreadScope();

        // Some more bindings.

        Bind<Helper>().To<DefaultHelper>().InSingletonScope();
    }
}

See that second binding there, the one that’s a Singleton? It had a dependency on the ExecutionEngine. This of course threw a gigantic spanner in the works, as an instance of the ExecutionEngine was no longer only being used on one thread at a time, leaving it wide open to concurrency issues (which is exactly what was happening).

If you’re unfamiliar with the Singleton scope, it basically means that only one instance of the class is going to be instantiated in the application. This instance will then be re-used every time that dependency is requested.

At some point someone had refactored (that’s good!) one of the business classes (which were quite monolithic) and had extracted some of the functionality into that Helper class. Luckily this particular helper was only related to login, which explained why the failures only occurred during login in the load tests, so the impact was isolated.

The Solution

All of the business classes in the application were Transient scoped. This helper class was essentially a business class, but had been marked as Singleton for some reason. The simplest solution was to make it match the scoping of the other business classes and mark it as Transient too. This reduced the number of Login failures during the medium load test to 0 (yay!) which was good enough for the business.

The Better Solution

Of course, the code is still terrible underneath, and more subtle failures could still be lurking (can we be sure that every single time the ExecutionEngine is used it's only being used on the thread it was created on? Not without adding thread affinity into the class itself), but you don't always get time to fix underlying issues. As per my previous post, Champion your Code, normally I would fight pretty hard to fix the root cause of the problem (that goddamn ExecutionEngine). This time though…well the code had already been sold to someone who was going to develop it themselves and I wasn't going to be working for that organisation for much longer, so I took the pragmatic approach and left it as it was. Granted, it's essentially a tripwire for some future developer, which makes me sad, but you can't always fix all the problems.

If given the opportunity I would probably change the way the ExecutionEngine is used at least, so that it isn’t as vulnerable to concurrency issues. The easiest way to do this would be to make the Execute method take a Command and return a DataSet, removing all of the state (except the connection string) from the class. That way it doesn’t matter how many threads attempt to Execute at the same time, they will all be isolated from each other. Not a small change, considering how many usages of the class in its current form existed, and the risk that that much change would introduce.
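Something along these lines (just a sketch of the direction I mean, not what was actually implemented):

using System.Data;
using System.Data.SqlClient;

// Sketch of a stateless ExecutionEngine: the command goes in as a parameter and the
// DataSet comes out as a return value, so concurrent callers can't trample each other.
// The only thing the class holds onto is the (immutable) connection string.
public class ExecutionEngine
{
    private readonly string _ConnectionString;

    public ExecutionEngine(string connectionString)
    {
        _ConnectionString = connectionString;
    }

    public DataSet Execute(SqlCommand command)
    {
        using (var connection = new SqlConnection(_ConnectionString))
        using (var adapter = new SqlDataAdapter(command))
        {
            command.Connection = connection;

            var result = new DataSet();
            adapter.Fill(result);
            return result;
        }
    }
}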

Summary

Singletons are dangerous. Well, I should say, Singletons that involve state in some way (either themselves, or with dependencies that involve state) are dangerous. If you go to mark something as being in Singleton scope, step away from the keyboard, go for a walk and think about it some more. There's probably another way to do it. When using dependency injection the impact of making something a Singleton is not always immediately obvious, so you have to be extremely careful.