
I should have posted about this last week, but I got so involved in talking about automating my functional tests that I forgot all about it. My apologies; this post is a little stale. I hope someone still finds it useful though, even as an opinion piece.

Automating the functional tests isn’t going so well actually, as I’m stuck on executing TestExecute in the virtual environment. It depends on there being a desktop for it to interact with (no doubt for the purposes of UI automation, like mouse clicks and sending keys), and I’m executing it remotely, from a different machine, on a machine that is not guaranteed to have any available interactive user sessions. It’s very frustrating.

Developer, Developer, Developer was hosted at QUT on Saturday 6 December, and it was great. This post is primarily to give a shoutout to everyone that I saw speak on the day, as well as to share some of my thoughts about the content. It’s not a detailed breakdown of everything that I learned, just some quick notes.

First up, the conference would not have been possible without the sponsors, which I will mention here because they are awesome. Octopus Deploy, DevExpress and CampaignMonitor were the main sponsors, with additional sponsorship from Readify, SSW, Telerik + more. The whole day only cost attendees $30 each, and considering we had 2 massive lecture theatres and 1 smaller room at QUT for the entire day, food and drinks and then prizes at the end, the sponsors cannot be thanked enough.

Octopussy

The first talk of the day (the keynote) was from Paul Stovell, the Founder of Octopus Deploy.

Paul talked a bit about the origins of Octopus Deploy through to where the company (and its flagship product) are now, and then a little bit about where they are heading.

I found it really interesting listening to Paul speak about his journey evolving Octopus Deploy into something people actually wanted to pay money for. Paul described how Octopus was developed, how the company grew from just him to 5+ people, their first office (where they are now) and a number of difficulties he had along the way as he himself evolved from a developer to a managing director. I’ve been reading Paul’s blog for a while now, so there wasn’t a huge amount of new information, but it was still useful to see how everything fit together and to hear Paul himself talk about it.

I don’t think I will ever develop something that I could turn into a business like that, but it’s nice to know that it is actually possible.

A big thank you to Paul for his presentation, and to Octopus Deploy for their sponsorship of the event.

Microservices

Mehdi Khalili presented the second talk that I attended, and it was about microservices. Everyone seems to be talking about microservices now (well maybe just the people I talk to), and to be honest, I’d almost certainly fail to describe them within the confines of this blurb, so I’ll just leave a link here to Martin Fowler’s great article on them. It’s a good read, if a little heavy.

Long story short, it’s a great idea, but it’s super hard to do right. Like everything.

Mehdi had some really good lessons to share from implementing the pattern in reality, including things like making sure your services are robust in the face of failure (using patterns like Circuit Breaker) and ensuring that you have a realistic means of tracking requests as they pass through multiple services.

Mehdi is pretty awesome and well prepared, so his slides are available here.

I really should have written this blog post sooner, because I can’t remember a lot of concrete points from Mehdi’s talk, apart from the fact that it was informative while not being ridiculously eye-opening (I had run across the concepts and lessons before either through other talks or blog posts). Still, well worth attending and a big thank you to Mehdi for taking the time to put something together and present it to the community.

Microservices 2, Electric Boogaloo

I like Damian Maclennan, he seems like the kind of guy who isn’t afraid to tell you when you’re shit, but also never hesitates to help out if you need it. I respect that attitude.

Damian followed Mehdi’s talk on microservices, with another talk on microservices. I’ve actually seen Damian (and Andrew Harcourt) talk about microservices before, at the Brisbane Azure User Group in October, so I contemplated not going to this talk (and instead going to see William Tulloch tell me why I shouldn’t say fuck in front of the client). In the end I decided to attend this one, and I was glad that I did.

Damian’s talk provided a good contrast to Mehdi’s, with a greater focus on a personal experience that he had implementing microservices. He talked about a fictionalised project that he had been a part of, for a company called “Pizza Brothers”, and did a great walkthrough of the state of the system at the point where he came onto the project to rescue it, and how it changed. He talked about how he (and the rest of the team) slowly migrated everything into a Service Bus/Event based microservice architecture, how that dealt with the problems of the existing system, and why.

He was clear to emphasise that the whole microservices pattern isn’t something that you implement in a weekend, and that if you have a monolithic system, it’s going to take a long time to change it for the better. It’s not an easy knot to unpick, and it takes a lot of effort and discipline to do right.

I think I appreciate these sorts of talks (almost like case studies) more than any other sort, as they give the context behind the guidelines and tips. I find that helps me to apply the lessons in the real world.

Another big thank you to Damian for taking the time to do this.

Eventing, Not Just for Services

Daniel Little gave the first presentation after lunch. He spoke about decoupling your domain model from the underlying persistence mechanism, which is typically a database.

The concepts that Daniel presented were very interesting. He took the event based design sometimes used in microservices, and used that to disconnect the domain model from the underlying database. The usage of events allowed the domain to focus on actual domain logic, and let something else worry about the persistence, without having to deal with duplicate classes or dumbing everything down so that the database could understand it.

I think this sort of pattern has a lot of value, as I often struggle with persistence concerns leaking into an implementation and muddying the waters of the domain. I hadn’t actually considered approaching the decoupling problem with this sort of solution, so the talk was very valuable to me.

Kudos to Daniel for his talk.

Cmon Guys, UX Is A Thing You Can Do

This one was a pair presentation from Juan Ojeda and Jim Pelletier from Kiandra IT. Juan is a typical user experience (UX) guy, but Jim is a developer who started doing more UX after working with Juan. Jim’s point of view was a bit different from the normal UX stuff you see, which was nice.

I think developers tend to gloss over the user experience in favour of interesting technical problems, and the attendance at this talk only reinforced that opinion. There weren’t many people present, which was a shame because I think the guys gave some interesting information about making sure that you always keep the end-user in mind whenever you develop software, and presented some great tools for accomplishing that.

User experience seems to be one of those things that developers are happy to relegate to a “UI guy”, which I find to be very un-agile, because it reduces the shared responsibility of the team. Sure, there are going to be people with expertise in the area, but we shouldn’t shy away from problems in that space, as they are just as interesting to solve as the technical ones. Even if they do involve people instead of bits and bytes.

Juan and Jim talked about some approaches to UX, including using actual users in your design (kind of like personas) and measuring the impact and usage of your applications. They briefly touched on some ways to include UX in Agile methodologies, and basically just reinforced how I felt about user experience and where it fits into the software development process.

Thanks to Juan and Jim for presenting.

Security is Terrifying

The second last talk was by far the most eye-opening. OJ Reeves did a great presentation on how we are all doomed because none of our computers are secure.

It made me never want to connect my computer to a network ever again. I might not even turn it on. It’s just not safe.

Seriously, this was probably the most entertaining and generally awesome talk of the day. It helps that OJ himself exudes an excitement for this sort of stuff, and his glee at compromising a test laptop (and then the things accessible from that laptop) was a joy to behold.

OJ did a fantastic demo where he used an (at the time unpatched) exploit in Internet Explorer (I can’t remember the version sorry) and Cross Site Scripting (XSS) to gain administrative access over the computer. He hid his intrusion by attaching the code he was executing to the memory and execution space of explorer! I didn’t even know that was possible. He then used his access to do all sorts of things, like take photos with the webcam, copy the clipboard, keylog and more dangerously, pivot his access to other machines on the network of the compromised machine that were not originally accessible from outside of the network (no external surface).

I didn’t take anything away from the talk other than terror, and the fact that there exist tools called Metasploit and Meterpreter which I should probably investigate one day. Security is one of those areas that I don’t think most developers spend enough time thinking about, and yet it’s one with some fairly brutal downsides if you mess it up.

You’re awesome OJ. Please keep terrifying muppets.

So You Want to be a Consultant

Damian Maclennan’s second talk for the day was about things that he learnt while working at Readify as a consultant (at various levels of seniority), before he moved to Octopus Deploy to be the CTO there.

I had actually recently applied for (and been rejected from) a position at Readify *shakes fist*, so it was interesting hearing about the kinds of gigs that he had dealt with, and the lessons he learnt.

To go into a little more detail about my Readify application, I made it to the last step in their interview process (which consists of Coding Test, Technical Interview, Culture Interview and then Interview with the Regional Manager) but they decided not to offer me the position. In the end I think they made the right decision, because I’m not sure if I’m cut out for the world of consulting at this point in my career, but I would have appreciated more feedback on the why, so that I could use it to improve further.

Damian had a number of lessons that he took away from his time consulting, which he presented in his typical humorous fashion. As with his talk on microservices earlier in the day, I found that the context around Damian’s lessons learnt was the most valuable part of the talk, although don’t get me wrong, the lessons themselves were great. It turns out that most of the problems you have to deal with as a consultant are not really technical problems (although there are plenty of those) and are instead people issues. An organisation might think they have a technical problem, but it’s more likely that they have something wrong with their people and the way they interact.

Again, this is another case where I wish I had taken actual notes instead of just enjoying the talk, because then I would be able to say something more meaningful here than “You should have been there.”

I’ve already thanked Damian above, but I suppose he should get two for doing two talks. Thanks Damian!

Conclusion

DDD is a fantastic community-run event that doesn’t try to push any agenda other than “you can be better”, which is awesome. Sure it has sponsors, but they aren’t in your face all the time, and the focus really is on the talks. I’ve done my best to summarise how I felt about the talks that I attended above, but obviously it’s no substitute for attending them. I’ve linked to the slides or videos where possible, but not a lot is available just yet. SSW was recording a number of the talks I went to, so you might even see my bald head in the audience when they publish them (they haven’t published them yet).

I highly recommend that you attend any and all future DDD’s until the point whereby it collapses under the weight of its own awesomeness into some sort of singularity. At that stage you won’t have a choice any longer, because its educational pull will be so strong.

Might as well accept your fate ahead of time.


In my last blog post, I mentioned the 3 classifications that I think tests fall into: Unit, Integration and Functional.

Of course, regardless of classification, all tests are only valuable if they are actually being executed. It’s wonderful to say you have tests, but if you’re not running them all the time, and actually looking at the results, they are worthless. Worse than worthless if you think about it, because the presence of tests gives a false sense of security about your system.

Typically executing Unit Tests (and Integration Tests if they are using the same framework) is trivial, made vastly easier by having a build server. It’s not that bad even if you don’t have a build server, because those sorts of tests can typically be run on a developer’s machine, without a huge amount of fanfare. The downside of not having a build server is that the developers in question need to remember to run the tests. As creative people, we are sometimes not great at following a checklist that includes “wait for tests to run”.

Note that I’m not saying developers should not be running tests on their own machines, because they definitely should be. I would usually limit this to Unit tests though, or very self-contained Integration tests. You need to be very careful about complicating the process of actually writing and committing code if you want to produce features and improvements in a reasonable amount of time. It’s very helpful to encourage people to run the tests themselves regularly, but to also have a fallback position. Just in case.

Compared to running Unit and Integration tests, Functional tests are a different story. Regardless of your software, you’ll want to run your Functional tests in a controlled environment, and this usually involves spinning up virtual machines, installing software, configuring the software and so on. To get good test results, and to lower the risk that the results have been corrupted by previous test runs, you’ll want to use a clean environment each time you run the tests. Setting up and running the tests then becomes a time consuming and boring task, something that developers hate.

What happens when you give a developer a task to do that is time consuming and boring?

Automation happens.

Procedural Logic

Before you start doing anything, it’s helpful to have a high level overview of what you want to accomplish.

At a high level, the automated execution of functional tests needed to:

  • Set up a test environment.
    • Spin up a fresh virtual machine.
    • Install the software under test.
    • Configure software under test.
  • Execute the functional tests.
  • Report the results.

Fairly straightforward. As with everything related to software though, the devil is in the details.

For anyone who doesn’t want to listen to me blather, here is a link to a GitHub repository containing sanitized versions of the scripts. Note that the scripts were not complete at the time of this post, but will be completed later.

Now, on to the blather!

Automatic Weapons

In order to automate any of the above, I would need to select a scripting language.

It would need to be able to do just about anything (which is true of most scripting languages), but it would also have to allow me to remotely execute a script on a machine without having to log onto it or use the UI in any way.

I’ve been doing a lot of work with Powershell recently, mostly using it to automate build, package and publish processes. I’d hesitated to learn Powershell for a long time, because every time I encountered something that I thought would have been made easier by using Powershell, I realised I would have to spend a significant amount of time learning just the basics of Powershell before I could do anything useful. I finally bit the bullet and did just that, and it’s snowballed from there.

Powershell is the hammer and everything is a nail now.

Powershell is a well established scripting language, and is installed on basically every modern version of Windows. It is powerful by itself, and its integration with the .NET framework allows a C# developer like me to fall back to the familiar .NET BCL for anything I can’t accomplish using just Powershell and its cmdlets. Finally, Powershell Remote Execution allows you to configure a machine so that authenticated users can remotely execute scripts on it.
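As a trivial example of that .NET fallback (just a quick sketch, not something from the project itself), anything in the BCL is directly callable from a Powershell script:

# Create a uniquely named temporary directory using .NET BCL types from Powershell.
$path = [System.IO.Path]::Combine([System.IO.Path]::GetTempPath(), [System.Guid]::NewGuid().ToString("N"))
New-Item -ItemType Directory -Path $path | Out-Null
Write-Host "Created temporary directory [$path]."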

So, Powershell it was.

A little bit more about Powershell Remote Execution. It leverages Windows Remote Management (WinRM), and once you’ve got all the bits and pieces set up on the target machine, it is very easy to use.

A couple of things to be aware of with remote execution:

  1. By default, the Windows Remote Management (WinRM) service is not enabled on some versions of Windows. Obviously it needs to be running.
  2. Powershell Remote Execution communicates over port 5985 (HTTP) and 5986 (HTTPS). Earlier versions used 80 and 443. These ports need to be configured in the Firewall on the machine in question.
  3. The user you are planning on using for the remote execution (and I highly suggest using a brand new user just for this purpose) needs to be a member of the [GROUP HERE] group.
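For reference, the one-off setup on the target machine looks something like the following sketch (run from an elevated Powershell prompt; New-NetFirewallRule is only available on Windows 8/Server 2012 and later, and the TrustedHosts entry is only needed on the calling machine when connecting by IP address outside of a domain):

# On the target machine: enable the WinRM service and create the default listener.
Enable-PSRemoting -Force

# On the target machine: allow the default HTTP port mentioned above through the firewall.
New-NetFirewallRule -DisplayName "Windows Remote Management (HTTP-In)" -Direction Inbound -Protocol TCP -LocalPort 5985 -Action Allow

# On the calling machine: trust the target when connecting to it by IP address.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "[TARGET IP ADDRESS]" -Force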

Once you’ve sorted the things above, actually remotely executing a script can be accomplished using the Invoke-Command cmdlet, like so:

$pw = ConvertTo-SecureString '[REMOTE USER PASSWORD]' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('[REMOTE USERNAME]', $pw)
$session = New-PSSession -ComputerName $ipaddress -Credential $cred 

write-host "Beginning remote execution on [$ipaddress]."

$testResult = Invoke-Command -Session $session -FilePath "$root\remote-download-files-and-run-functional-tests.ps1" -ArgumentList $awsKey, $awsSecret, $awsRegion, $awsBucket, $buildIdentifier

Notice that I don’t have to use a machine name at all. IP Addresses work fine in the ComputerName parameter. How do I know the IP address? That information is retrieved when starting the Amazon EC2 instance.
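As a quick sketch of what that retrieval looks like once the instance exists (recent versions of the AWS Powershell module expose the instances via the .Instances property on the reservation; older versions used .RunningInstance instead):

# Get the private IP address of a known instance via the EC2 cmdlets.
$instance = (Get-EC2Instance -InstanceId $instanceId).Instances | Select-Object -First 1
$ipaddress = $instance.PrivateIpAddress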

Environmental Concerns

In order to execute the functional tests, I wanted to be able to create a brand new, clean virtual machine without any human interaction. As I’ve stated previously, we primarily use Amazon EC2 for our virtualisation needs.

The creation of a virtual machine for functional testing would need to be done from another AWS EC2 instance, the one running the TeamCity build agent. The idea being that the build agent instance is responsible for building the software/installer, and would in turn farm out the execution of the functional tests to a completely different machine, to keep a good separation of concerns.

Amazon supplies two methods of interacting with AWS EC2 (Elastic Compute Cloud) via Powershell on a Windows machine.

The first is a set of cmdlets (Get-EC2Instance, New-EC2Instance, etc).

The second is the classes available in the .NET SDK for AWS.

The upside of running on an EC2 instance that was based off an Amazon supplied image is that both of those methods are already installed, so I didn’t have to mess around with any dependencies.

I ended up using a combination of both (cmdlets and .NET SDK objects) to get an instance up and running, mostly because the cmdlets didn’t expose all of the functionality that I needed.
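To make the distinction concrete, here is a minimal sketch of both approaches. Importing the AWSPowerShell module also loads the underlying AWS SDK assemblies, which is why the .NET classes are available without any extra work (the calls below are illustrative of the general approach, not lifted from the real scripts):

Import-Module AWSPowerShell

# The cmdlet approach.
Set-AWSCredentials -AccessKey $awsKey -SecretKey $awsSecret
Set-DefaultAWSRegion -Region $awsRegion
$reservations = Get-EC2Instance

# The .NET SDK approach, for anything the cmdlets don't expose.
$client = New-Object Amazon.EC2.AmazonEC2Client($awsKey, $awsSecret, [Amazon.RegionEndpoint]::GetBySystemName($awsRegion))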

There were 3 distinct parts to using Amazon EC2 for the test environment: Creation, Configuration and Waiting, and Clean Up. All of these needed to be automated.

Creation

Obviously an instance needs to be created. The reason this part is split from the Configuration and Waiting is because I’m still not all that accomplished at error handling and returning values in Powershell. Originally I had creation and configuration/waiting in the same script, but if the call to New-EC2Instance returned successfully and then something else failed, I had a hard time returning the instance information in order to terminate it in the finally block of the wrapping script.

The full content of the creation script is available at create-new-ec2-instance.ps1. It’s called from the main script (functional-tests.ps1).
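To give a sense of its shape, here is a heavily simplified sketch (not the real script). The important part is that the only output is the identifier of the created instance, so the caller always has enough information to terminate it later; the parameters and instance type below are illustrative.

param
(
    [string]$awsKey,
    [string]$awsSecret,
    [string]$awsRegion,
    [string]$amiId,
    [string]$instanceType = "m3.medium"
)

Set-AWSCredentials -AccessKey $awsKey -SecretKey $awsSecret
Set-DefaultAWSRegion -Region $awsRegion

# Create exactly one instance from the supplied baseline image and return its identifier.
# (Older versions of the AWS Powershell module expose .RunningInstance instead of .Instances.)
$reservation = New-EC2Instance -ImageId $amiId -InstanceType $instanceType -MinCount 1 -MaxCount 1
return $reservation.Instances[0].InstanceId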

Configuration and Waiting

Beyond the configuration done as part of creation, instances can be tagged to add additional information. Also, the script needs to wait on a number of important indicators to ensure that the instance is ready to be interacted with. It made sense to do these two things together for reasons.

The tags help to identify the instance (the name) and also mark the instance as being acceptable to be terminated as part of a scheduled cleanup script that runs over all of our EC2 instances in order to ensure we don’t run expensive instances longer than we expected to.

As for the waiting indicators, the first indicator is whether or not the instance is running. This is an easy one, as the state of the instance is very easy to get at. You can see the function below, but all it does is poll the instance every 5 seconds to check whether or not it has entered the desired state yet.
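A minimal sketch of that polling loop (the function name and the timeout are illustrative; treat this as a simplified version rather than the real thing):

function Wait-ForEC2InstanceState
{
    param
    (
        [string]$instanceId,
        [string]$desiredState = "running",
        [int]$timeoutSeconds = 300
    )

    $timer = [System.Diagnostics.Stopwatch]::StartNew()
    while ($timer.Elapsed.TotalSeconds -lt $timeoutSeconds)
    {
        # Older versions of the AWS Powershell module expose .RunningInstance instead of .Instances.
        $instance = (Get-EC2Instance -InstanceId $instanceId).Instances | Select-Object -First 1
        if ($instance.State.Name -eq $desiredState) { return }

        Write-Host "Instance [$instanceId] is currently [$($instance.State.Name)], waiting for [$desiredState]."
        Start-Sleep -Seconds 5
    }

    throw "Instance [$instanceId] did not reach state [$desiredState] within [$timeoutSeconds] seconds."
}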

The second indicator is a bit harder to get at, but it is actually much more important. EC2 instances can be configured with status checks, and one of those status checks is whether or not the instance is actually reachable. I’m honestly not sure if this is something that someone before me set up, or if it is standard on all EC2 instances, but it’s extremely useful.

Anyway, accessing this status check is a bit of a rabbit hole. You can see the function below, but it uses a similar approach to the running check. It polls some information about the instance every 5 seconds until it meets certain criteria. This is the one spot in the entire script that I had to use the .NET SDK classes, as I couldn’t find a way to get this information out of a cmdlet.
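A similarly simplified sketch of that check. Where the original went through the .NET SDK classes, this version uses the Get-EC2InstanceStatus cmdlet, which exposes the same instance and system status checks in more recent versions of the AWS Powershell module:

function Wait-ForEC2InstanceStatusChecks
{
    param
    (
        [string]$instanceId,
        [int]$timeoutSeconds = 600
    )

    $timer = [System.Diagnostics.Stopwatch]::StartNew()
    while ($timer.Elapsed.TotalSeconds -lt $timeoutSeconds)
    {
        # The instance is considered ready once both the instance and system status checks pass,
        # which covers the reachability check mentioned above.
        $status = Get-EC2InstanceStatus -InstanceId $instanceId
        if ($status -ne $null -and $status.Status.Status.Value -eq "ok" -and $status.SystemStatus.Status.Value -eq "ok")
        {
            return
        }

        Write-Host "Instance [$instanceId] has not passed its status checks yet, waiting."
        Start-Sleep -Seconds 5
    }

    throw "Instance [$instanceId] did not pass its status checks within [$timeoutSeconds] seconds."
}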

The full content of the configuration and wait script is available at tag-and-wait-for-ec2-instance.ps1, and is just called from the main script.

Clean Up

Since you don’t want to leave instances hanging around, burning money, the script needs to clean up after itself when it is done.

Programmatically terminating an instance is quite easy, but I had a lot of issues around the robustness of the script itself, as I couldn’t quite grasp the correct path to ensure that a clean up was always run if an instance was successfully created. The solution to this was to split the creation and tag/wait into different scripts, to ensure that if creation finished it would always return identifying information about the instance for clean up.

Termination happens in the finally block of the main script (functional-tests.ps1).
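Putting those pieces together, the overall shape of that main script is roughly the following (a simplified sketch; the script names match the repository, but the parameters are illustrative):

$instanceId = $null
try
{
    $instanceId = & "$root\create-new-ec2-instance.ps1" -awsKey $awsKey -awsSecret $awsSecret -awsRegion $awsRegion -amiId $amiId
    & "$root\tag-and-wait-for-ec2-instance.ps1" -instanceId $instanceId

    # ... remote execution of the functional tests happens here, as shown earlier ...
}
finally
{
    if ($instanceId -ne $null)
    {
        Write-Host "Terminating instance [$instanceId]."
        Remove-EC2Instance -InstanceId $instanceId -Force
    }
}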

Instant Machine

Of course all of the instance creation above is dependent on actually having an AMI (Amazon Machine Image) available that holds all of the baseline information about the instance to be created, as well as other things like VPC (Virtual Private Cloud, basically how the instance fits into a network) and security groups (for defining port accessibility). I’d already gone through this process last time I was playing with EC2 instances, so it was just a matter of identifying the various bits and pieces that needed to be done on the machine in order to make it work, while keeping it as clean as possible in order to get good test results.

I went through the image creation process a lot as I evolved the automation script. One thing I found to be useful was to create a change log for the machine in question (I used a page in Confluence) and to version any images made. This helped me to keep the whole process repeatable, as well as documenting the requirements of a machine able to perform the functional tests.

To Be Continued

I think that’s probably enough for now, so next time I’ll continue and explain about automating the installation of the software under test and then actually running the tests and reporting the results.

Until next time!


Ahhhh automated tests. I first encountered the concept of automated tests 6-7 years ago via a colleague experimenting with NUnit. I wasn’t overly impressed at first. After all, your code should just work, you shouldn’t need to prove it. It’s safe to say I was a bad developer.

Luckily logic prevailed, and I soon came to accept the necessity of writing tests to improve the quality of a piece of software. It’s like double-entry bookkeeping, the tests provide checks and balances for your code, giving you more than one indicator as to whether or not it is working as expected.

Notice that I didn’t say that they prove your code is doing what it is supposed to. In the end tests are still written by a development team, and the team can still misunderstand what is actually required. They aren’t some magical silver bullet that solves all of your problems, they are just another tool in the tool box, albeit a particularly useful one.

Be careful when writing your tests. It’s very easy to write tests that actually end up making your code less able to respond to change. It can be very disheartening to go to change the signature of a constructor and to hit hundreds of compiler errors because someone helpfully wrote 349 tests that all use the constructor directly. I’ve written about this specific issue before, but in more general terms you need to be very careful about writing tests that hurt your codebase instead of helping it.

I’m going to assume that you are writing tests. If not, you’re probably doing it wrong. Unit tests are a good place to start for most developers, and I recommend The Art of Unit Testing by Roy Osherove.

I like to classify my tests into 3 categories. Unit, Integration and Functional.

Unit

Unit tests are isolationist, kind of like a paranoid survivalist. They don’t rely on anyone or anything, only themselves. They should be able to be run without instantiating any class but themselves, and should be very fast. They tend to exercise specific pieces of functionality, often at a very low level, although they can also encompass verifying business logic. This is less likely though, as business logic typically involves multiple classes working together to accomplish a higher level goal.

Unit tests are the lowest value tests for verifying that your piece of software works from an end-user’s point of view, purely because of their isolationist stance. It’s entirely plausible to have an entire suite of hundreds of unit tests passing and still have a completely broken application (it’s unlikely though).

Their true value comes from their speed and their specificity.

Typically I run my unit tests all the time, as part of a CI (Continuous Integration) environment, which is only possible if they run quickly, to tighten the feedback loop. Additionally, if a unit test fails, the failure should be specific enough that it is obvious why the failure occurred (and where it occurred).

I like to write my unit tests in the Visual Studio testing framework, augmented by FluentAssertions (to make assertions clearer), NSubstitute (for mocking purposes) and Ninject (to avoid creating a hard dependency on constructors, as previously described).

Integration

Integration tests involve multiple components working in tandem.

Typically I write integration tests to run at a level just below the User Interface and make them purely programmatic. They should walk through a typical user interaction, focusing on accomplishing some goal, and then checking that the goal was appropriately accomplished (i.e. changes were made or whatnot).

I prefer integration tests to not have external dependencies (like databases), but sometimes that isn’t possible (you don’t want to mock an entire API for example), so it’s best if they operate in a fashion that isn’t reliant on external state.

This means that if you’re talking to an API for example, you should be creating, modifying and deleting appropriate records for your tests within the tests themselves. The same can be said for a database, create the bits you want, clean up after yourself.

Integration tests are great for indicating whether or not multiple components are working together as expected, and for verifying that at whatever programmable level you have introduced the user can accomplish their desired goals.

Often, integration tests like the ones I have described above are incredibly difficult to write for a system that does not already have them. This is because you need to build the necessary programmability layer into the system design for the tests. This layer has to exist because, historically, programmatically executing most UI layers has proven to be problematic at best (and impossible at worst).

The downside is that they are typically much, much slower than unit tests, especially if they are dependent on external resources. You wouldn’t want to run them as part of your CI, but you definitely want to run them regularly (at least nightly, but I like midday and midnight) and before every release candidate.

I like to write my Integration tests in the same testing framework as my unit tests, still using FluentAssertions and Ninject, with as little usage of NSubstitute as possible.

Functional

Functional tests are very much like integration tests, but they have one key difference: they execute on top of whatever layer the user typically interacts with. Whether that is some user interface framework (WinForms, WPF) or a programmatically accessible API (like ASP.NET Web API), the tests focus on automating normal user actions as the user would typically perform them, with the assistance of some automation framework.

I’ll be honest, I’ve had the least luck with implementing these sorts of tests, because the technologies that I’ve personally used the most (CodedUI) have proven to be extremely unreliable. Functional tests written on top of a public facing programmable layer (like an API) I’ve had a lot more luck with, unsurprisingly.

The worst outcome of a set of tests is regular, unpredictable failures that have no bearing on whether or not the application is actually working from the point of view of the user. Changing the names of things or just the text displayed on the screen can lead to all sorts of failures in automated functional tests. You have to be very careful to use automation friendly meta information (like automation IDs) and to make sure that those pieces of information don’t change without good reason.

Finally, managing automated functional tests can be a chore, as they are often quite complicated. You need to manage this code (and it is code, so it needs to be treated like a first class citizen) as well, if not better than your actual application code. Probably better, because if you let it atrophy, it will very quickly become useless.

Regardless, functional tests can provide some amount of confidence that your application is actually working and can be used. Once implemented (and maintained) they are far more repeatable than someone performing a set of steps manually.

Don’t think that I think manual testers are not useful in a software development team. Quite the contrary. I think that they should be spending their time and applying their experience to more worthwhile problems, like exploratory testing as opposed to simply being robots following a script. That's why we have computers after all.

I have in the past used CodedUI to write functional tests for desktop applications, but I can’t recommend it. I’ve very recently started using TestComplete, and it seems to be quite good. I’ve heard good things about Selenium, but have never used it myself.

Naming

Your tests should be named clearly. The name should communicate the situation and the expected outcome.

For unit tests I like to use the following convention:

[CLASS_NAME]_[CLASS_COMPONENT]_[DESCRIPTION_OF_TEST]

An example of this would be:

DefaultConfigureUsersViewModel_RegisterUserCommand_WhenNewRegisteredUsernameIsEmptyCommandIsDisabled

I like to use the class name and class component so that you can easily see exactly where the test is. This is important when you are viewing test results in an environment that doesn't support grouping or sorting (like in the text output from your tests on a build server or in an email or something).

The description should be easily readable, and should convey to the reader an indication of the situation (When X) and the expected outcome.

For integration tests I tend to use the following convention:

I_[FEATURE]_[DESCRIPTION_OF_TEST]

An example of this would be:

I_UserManagement_EndUserCanEnterTheDetailsOfAUserOfTheSystemAndRegisterThemForUseInTheRestOfTheApplication

As I tend to write my integration tests using the same test framework as the unit tests, the prefix is handy to tell them apart at a glance.

Functional tests are very similar to integration tests, but as they tend to be written in a different framework the prefix isn't necessary. As long as they have a good, clear description.

There are other things you can do to classify tests, including using the [TestCategory] attribute (in MSTest at least), but I find good naming to be more useful than anything else.

Organisation

My experience is mostly limited to C# and the .NET framework (with bits and pieces of other things), so when I speak of organisation, I’m talking primarily about solution/project structures in Visual Studio.

I like to break my tests into at least 3 different projects.

[COMPONENT].Tests
[COMPONENT].Tests.Unit
[COMPONENT].Tests.Integration

The root tests project is to contain any common test utilities or other helpers that are used by the other two projects, which should be self explanatory.

Functional tests tend to be written in a different framework/IDE altogether, but if you’re using the same language/IDE, the naming convention to follow for the functional tests should be obvious.

Within the projects it’s important to name your test classes to match up with your actual classes, at least for unit tests. Each unit test class should be named the same as the actual class being tested, with a suffix of UnitTests. I like to do a similar thing with IntegrationTests, except the name of the class is replaced with the name of the feature (i.e. UserManagementIntegrationTests). I find that a lot of the time integration tests tend to group more naturally around features than around individual classes anyway.

Tying it All Together

Testing is one of the most powerful tools in your arsenal, having a major impact on the quality of your code. And yet I find that people don’t tend to give it a lot of thought.

The artefacts created for testing should be treated with the same amount of care and thoughtfulness as the code that is being tested. This includes things like having a clear understanding of the purpose and classification of a test, naming and structure/position.

I know that most of the above seems a little pedantic, but I think that having a clear convention to follow is important so that developers can focus their creative energies on the important things, like solving problems specific to your domain. If you know where to put something and approximately what it looks like, you reduce the cognitive load in writing tests, which in turn makes them easier to write.

I like it when things get easier.


As I mentioned in a previous post, I recently started a new position at Onthehouse.

Onthehouse uses Amazon EC2 for their cloud based virtualisation, including that of the build environment (TeamCity). It’s common for a build environment to be largely ignored as long as it is still working, until the day it breaks, and then it all goes to hell.

Luckily that is not what happened.

Instead, the team identified that the build environment needed some maintenance, specifically around one of the application specific Build Agents.

It’s an ongoing process, but the reason for there being an application specific Build Agent is that the application has a number of arcane, installed, licenced third-party components. It’s VB6, so it’s hard to manage those dependencies in a way that is mobile. Something to work on in the future, but not a priority right now.

My first task at Onthehouse was to ensure that changes made to the running Instance of the Build Agent had been appropriately merged into the base Image. As someone who had never before used the Amazon virtualisation platform (Amazon EC2), I was somewhat confused.

This post follows my journey through that confusion and out the other side into understanding and I hope it will be of use to someone else out there.

As an aside, I think that getting new developers to start with build services is a great way to familiarise them with the most important part of an application, how to build it. Another fantastic first step is to get them to fix bugs.

Virtually Awesome

As I mentioned previously, I’d never used AWS (Amazon Web Services) before, other than uploading some files to an Amazon S3 account, let alone the virtualisation platform (Amazon EC2).

My main experience with virtualisation comes from using Virtual Box on my own PC. Sure, I’ve used Azure to spin up machines and websites, and I’ve occasionally interacted with VMWare and Hyper-V, but Virtual Box is what I use every single day to build, create, maintain and execute sandbox environments for testing, exploration and all sorts of other things.

I find Virtual Box straightforward.

You have a virtual machine (which has settings, like CPU Cores, Memory, Disks, etc) and each machine has a set of snapshots.

Snapshots are a record of the state of the virtual machine and its settings at a point in time chosen by the user. I take Snapshots all the time, and I use them to easily roll back to important moments, like a specific version of an application, or before I wrecked everything by doing something stupid and destructive (it happens more than you think).

Thinking about this now, I’m not sure how Virtual Box and Snapshots interact with multiple disks. Snapshots seem to be primarily machine based, not disk based, encapsulating all of the things about the machine. I suppose that probably includes the disks. I guess I don’t tend to use multiple disks in the machines I’m snapshotting all the time, only using them rarely for specific tasks.

Images and Instances

Amazon EC2 (Elastic Compute) does not work the same way as Virtual Box.

I can see why it’s different from Virtual Box, as they have entirely different purposes. Virtual Box is intended to facilitate virtualisation for the single user. EC2 is about using virtualisation to leverage the power of the cloud. Single users are a completely foreign concept. It’s all about concurrency and scale.

In Amazon EC2 the core concept is an Image (or Amazon Machine Image, AMI). Images describe everything about a virtual machine, kind of like a Virtual Machine in Virtual Box. However, in order to actually use an Image, you must spin up an Instance of that Image.

At the point in time you spin up an Instance of an Image, they have diverged. The Instance typically contains a link back to its Image, but it’s not a hard link. The Instance and Image are distinctly separate, and you can delete the Image (which, if you are using an Amazon supplied Image, will happen regularly) without negatively impacting the running Instance.

Instances generally have Volumes, which I think are essentially virtual disks. Snapshots come into play here as well, but I don’t understand Volumes and Snapshots all that well at this point in time, so I’m going to conveniently gloss over them. Snapshots definitely don’t work like VirtualBox snapshots though, I know that much.

Instances can generally be rebooted, stopped, started and terminated.

Reboot, stop and start do what you expect.

Terminating an instance kills it forever. It also kills the Volume attached to the instance if you have that option selected. If you don’t have the Image that the Instance was created from, you’re screwed; it’s gone for good. Even if you do, you will have lost any changes made to the Instance since it was created from that Image.
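For reference, those lifecycle operations map onto the AWS Powershell cmdlets fairly directly (a quick sketch; the instance id variable is illustrative):

Restart-EC2Instance -InstanceId $instanceId              # reboot
Stop-EC2Instance -InstanceId $instanceId                 # stop
Start-EC2Instance -InstanceId $instanceId                # start
Remove-EC2Instance -InstanceId $instanceId -Force        # terminate: gone for good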

Build It Up

Back to the Build environment.

The application specific Build Agent had an Image, and an active Instance, as normal.

This Instance had been tweaked, updated and changed in various ways since the Image was made, so much so that no-one could remember exactly what had been done. Typically this wouldn’t be a major issue, as Instances don’t just up and disappear.

Except this Instance could, and had in the past.

The reason for its apparently ephemeral nature was that Amazon offers a spot pricing option for Instances. Spot pricing allows you to create a spot request and set your own price for an hour of compute time. As long as the spot price is below that price, your Instance will run. If the spot price goes above your price, your Instance dies. You can set up your spot price request to be recurring, such that the Instance will restart when the price goes down again, but you will have lost all information not on the baseline Image (an event like that is equivalent to terminating the instance and starting another one).

Obviously we needed to ensure that the baseline Image was completely able to run a build of the application in question, requiring the minimal amount of configuration on first start.

Thus began a week long adventure to take the current base Image, create an Instance from it, and get a build working, so we could be sure that if our Instance was terminated it would come back and we wouldn’t have to spend a week getting the build working again.

I won’t go into detail about the whole process, but it mostly involved lots of manual steps to find out what was wrong this time, fixing it in as nice a way as time permitted and then trying again. It also involved a lot of waiting. Waiting for instances, waiting for builds, waiting for images. Not very interesting.

A Better Approach

Knowing what I know now (and how long the whole process would take), my approach would be slightly different.

Take an Image of the currently running Instance, spin up an Instance of it, change all of the appropriate unique settings to be invalid (mostly the Build Agent name) and then take another Image. That’s your baseline.
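In Powershell terms, that approach is only a handful of cmdlet calls, something like the following sketch (the instance id, image names and instance type are made up):

# Capture the current state of the running Build Agent instance as an image.
$imageId = New-EC2Image -InstanceId "i-12345678" -Name "build-agent-baseline-v2" -Description "Baseline for the application specific Build Agent"

# Spin up an instance of that image, reset the unique settings (mostly the Build Agent name) on it,
# and then capture the cleaned up instance as the new baseline image.
$instanceId = (New-EC2Instance -ImageId $imageId -InstanceType "m3.medium" -MinCount 1 -MaxCount 1).Instances[0].InstanceId
$baselineImageId = New-EC2Image -InstanceId $instanceId -Name "build-agent-baseline-v3"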

Don’t get me wrong, it was a much better learning experience the first way, but it wasn’t exactly an excellent return on investment from the point of view of the organisation.

Ah well, hindsight.

A Better Architecture

The better architecture is to have TeamCity manage the lifetime of its Build Agents, which it is quite happy to do via Amazon EC2. TeamCity can then manage the instances as it sees fit, spinning them down during idle periods, and even starting more during periods of particularly high load (I’m looking at you, end of the iteration crunch time).

I think this is definitely the approach we will take in the future, but that’s a task for another day.

Conclusion

Honestly, the primary obstacle in this particular task was learning how Amazon handles virtualisation, and wrapping my head around the differences between that and Virtual Box (which is where my mental model was coming from). After I got my head around that I was in mostly familiar territory, diagnosing build issues and determining the best fix that would maximise mobility in the future, while not requiring a massive amount of time.

From the point of view of me, a new starter, this exercise was incredibly valuable. It taught me an immense amount about the application, its dependencies, the way its built and all sorts of other, half-forgotten tidbits.

From the point of view of the business, I definitely should have realised that there was a quicker path to the end goal (making sure we could recover from a lost Build Agent instance) and taken that into consideration, rather than trying to work my way through the arcane dependencies of the application. There’s always the risk that I missed something subtle as well, which will rear its ugly head next time we lose the Build Agent instance.

Which could happen.

At any moment.

(Cue Ominous Music)


So, I got a new job. Typically I reserve this blog for technical stuff/gardening, but I want to write this down so I can look back at it later.

Anyway, technically I’ve had this new job for a while now, but I only started this Monday (November 17). I had a nice 5 week break (big thanks to my previous company, who offered to pay out my notice period when I resigned), and while I did some programming and learning, I mostly just gardened, spent time with my family and played video games. It’s important to take a break from time to time and recharge the batteries.

I already knew this going in, but this office is amazing.

It’s weird how much of a difference this makes to me right off the bat. I mean, I understand how tons of tiny, daily annoyances can quickly ruin your motivation and productivity, but I didn’t expect to be so happy about there being monitors with adjustable arms. There are monitors on all of the walls that can be configured to show whatever helps the team (mostly burn downs and task boards). The lighting is great too; it’s no basement, and it has a nice mix of natural and artificial light.

A nice workspace always tells me that the organisation actually cares somewhat about their workforce. It doesn’t have to be flashy and pointlessly expensive (that shows they are overcompensating for something), just well thought out and designed.

Maybe I’ve just worked at smaller companies or something, but this is completely new to me, which is probably why it’s making such an impression.

Why?

If you’re unhappy at your place of work you really should try to understand why and address the problem before you decide to leave. In my case, I had a number of frustrations that by themselves would not have been enough, but when combined were enough for me to decide that I needed something (and somewhere) new.

The last straw was when the project we were working on was cancelled due to circumstances out of our control, just when we were starting to whip it into shape and deliver regularly and reliably. Disappointingly, the company didn’t really seem to have any vision about what was happening next either. I wish them all the best moving forward, but it just wasn’t the place for me.

If you’re not hurting for money (which, if you’re smart with the money you make, you shouldn’t be) looking for a new job can be a fun experience.

When you remove that feeling of urgency (oh god, I have no money and no job, I don’t want to lose my house) you can pick and choose what you want, and you don’t stress so much about rejections. It’s nice. That was the position I found myself in (I wasn’t going to quit until I had something better), but to be honest, almost anything looks better when you’re unhappy, so I don’t know how well I calibrated my expectations.

Where?

I’m now a Senior Developer at Onthehouse. They are an Australian company with offices in Melbourne, Sydney and Brisbane. The full name of the company is Onthehouse Holdings, and they were formed by a group that acquired a set of other companies with already established products and services in the Real Estate market, with the intention of combining all of those products and services in new and interesting ways.

As for the office, a picture is worth a thousand words, so here are many thousands of words.

I took those photos earlier this week, after most people had knocked off for the day. Being later in the day, you don’t quite get to the see all the wonderful natural light that the big windows provide, but it really is quite nice. The office is at the corner of Adelaide and Edward, so the windows actually have quite a nice view down onto the street as well.

As a bonus, I can get to the train station without ever actually going outside. Take that weather.

What?

The team that I’m in is relatively small (9 people, 4 developers, 1 tester, 1 business analyst/scrum master, 2 product owners and a delivery manager who manages other teams as well), but they are all working on a single product. It’s an installed application (with many satellite components) for use by Real Estate agents, to help them manage everything that they need to do. Its one of those monolithic applications. I’m pretty sure if a Real Estate agent/Property Manager needed to order a kitchen sink, there would be a button for that.

More importantly, it’s a successful application, and is already in use by a huge number of people in their daily routine.

It’s nice to do product work again, especially with it actually being identified as product work and made a priority, instead of just assuming that “products” happen automatically. They don’t.

Critically important to me, though, was that the people in power expressed enthusiasm for the product (and for making it better) and had a vision about where they (and thus the company) wanted it to go. A cause to rally behind can really light a fire under your development team, especially when presented with passion.

They are using the right methodology (Agile, Scrum specifically), the right tools (Jira, Confluence, TeamCity, Amazon Cloud Services) and they have the right attitude.

Everything can’t be all peaches and ice cream though. There is one downside.

The application is mostly VB6. From what I hear (I haven’t seen a lot of it yet) it’s not the best VB6 either.

VB6 is one of those languages that just will not die, because it was so easy to write. It’s good because almost anyone can get in and get something accomplished. It’s bad for exactly the same reason, and while those people might have had the best intentions, they didn’t necessarily have the skill and experience to do it well. It’s important for organisations with large software assets written in VB6 to realise that the world has moved on since then, and that they need to plan for its replacement.

Typically this would be a bit of a deal breaker for me. I want to move forward, not backwards. In this case my trepidation was overcome by the fact that they are slowly migrating away from the VB6, with the intent to eventually replace it altogether with something better. They’ve already made some inroads into this approach, moving components and features into .NET supported by services in the cloud (Amazon).

The fact that they have consciously decided against a rewrite (or they’ve already tried and failed a bunch of times) means they understand that rewrites never work, and are instead committed to evolution.

This is a much better approach.

Conclusion

I’m very much looking forward to working here at Onthehouse and overcoming the many challenges that the team will encounter. I think I have a real opportunity to do some good here, and to help delight some clients while also helping to move a great piece of software into a much better position.