We’re Finally Paying Attention, Part 3
Full disclosure: most of the Elastalert-related work was actually done by a colleague of mine; I’m just writing about it because I thought it was interesting.
Continuing with the Elastalert theme, it’s time to talk about configuration and the deployment thereof.
Last week I covered off exactly how we put together the infrastructure for the Elastalert stack. It wasn’t anything fancy (AMI through Packer, CloudFormation template deployed via Octopus), but there were some tricksy bits relating to Python conflicts between Elastalert and the built-in AWS EC2 initialization scripts.
With that out of the way, we get into the meatiest part of the process: how we manage the configuration of Elastalert, i.e. the alerts themselves.
The Best Laid Plans
When it comes to configuring Elastalert, there are basically only two things to worry about: the overall configuration, and the rules and actions that make up the alerts.
The overall configuration covers things like where to find Elasticsearch, which Elasticsearch index to write results into, high level execution timings and so on. All that stuff is covered clearly in the documentation, and there aren’t really any surprises.
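To give a sense of scale, the whole thing boils down to a single YAML file. The sketch below (written as the sort of shell snippet a deployment script might use to drop the file into place) shows the flavour of it; the key names come straight from the Elastalert documentation, but the values are placeholders rather than our real settings.

# Illustrative only: writes a minimal Elastalert global config.
# Key names are from the Elastalert docs; values are placeholders.
cat > /opt/config/elastalert.yaml <<EOF
es_host: elasticsearch.example.internal   # where the Elasticsearch cluster lives
es_port: 9200
rules_folder: /opt/rules                  # every file in here is treated as a rule
run_every:
  minutes: 5                              # how often Elastalert queries Elasticsearch
buffer_time:
  minutes: 15                             # how far back each query reaches
writeback_index: elastalert_status        # index Elastalert writes its own state into
EOF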
The rules are where it gets interesting. There are a wide variety of ways to trigger actions off the connected Elasticsearch cluster, and I provided an example in the initial blog post of this series. I’m not going to go into too much detail about the rules and their structure or capabilities because the documentation goes into that sort of thing at length. For the purposes of this post, the main thing to be aware of is that each rule is fully encapsulated within a file.
The nice thing about everything being inside files is that it makes deployment incredibly easy.
All you have to do is identify the locations where the files are expected to be and throw the new ones in, overwriting as appropriate. If you’re dealing with a set of files it’s usually smart to clean out the destination first (so deletions are handled correctly), but it’s still pretty straightforward.
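In its simplest form, that sort of deployment is barely more than the following sketch (the directory variables here are placeholders, not our actual layout):

# Minimal sketch of a file-based deployment: clean out the destination
# so deleted rules don't linger, then copy the new files into place.
# DEPLOY_DIR and TARGET_DIR are placeholders for wherever the package
# lands and wherever Elastalert expects to find its files.
rm -rf "$TARGET_DIR"/rules/*
cp -r "$DEPLOY_DIR"/rules/* "$TARGET_DIR"/rules/
cp "$DEPLOY_DIR"/config/elastalert.yaml "$TARGET_DIR"/config/elastalert.yaml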
When we started on the whole Elastalert journey, the original plan was for a simple file copy + service restart.
Then Docker came along.
No Plan Survives Contact With The Enemy
To be fair, even with Docker, the original plan was still valid.
All of the configuration was still file based, so deployment was still as simple as copying some files around.
Mostly.
Docker did complicate a few things though. Instead of Elastalert being installed directly on the machine, we had to run an Elastalert image inside a Docker container.
Supplying the configuration files to the Elastalert container isn’t hard. When starting the container you just map certain local directories to directories in the container and it all works pretty much as expected. As long as the files exist in a known place after deployment, you’re fine.
However, in order to “restart” Elastalert, you have to find and murder the container you started last time, and then start up a new one so it will capture the new configuration files and environment variables correctly.
This is all well and good, but even after doing that you only really know whether or not the container itself is running, not necessarily the Elastalert process inside the container. If your config is bad in some way, the Elastalert process won’t start, even though the container will quite happily keep chugging along. So you need something to detect if Elastalert itself is up inside the container.
Putting all of the above together, you get something like this:
echo -e "STEP: Stop and remove existing docker containers..."
echo "Checking for any existing docker containers"
RUNNING_CONTAINERS=$(docker ps -a -q)
if [ -n "$RUNNING_CONTAINERS" ]; then
    echo "Found existing docker containers."
    echo "Stopping the following containers:"
    docker stop $(docker ps -a -q)
    echo "Removing the following containers:"
    docker rm $(docker ps -a -q)
    echo "All containers removed"
else
    echo "No existing containers found"
fi
echo -e "...SUCCESS\n"

echo -e "STEP: Run docker container..."
ELASTALERT_CONFIG_FILE="/opt/config/elastalert.yaml"
SUPERVISORD_CONFIG_FILE="/opt/config/supervisord.conf"
echo "Elastalert config file: $ELASTALERT_CONFIG_FILE"
echo "Supervisord config file: $SUPERVISORD_CONFIG_FILE"
echo "ES HOST: $ES_HOST"
echo "ES PORT: $ES_PORT"
docker run -d \
    -v $RUN_DIR/config:/opt/config \
    -v $RUN_DIR/rules:/opt/rules \
    -v $RUN_DIR/logs:/opt/logs \
    -e "ELASTALERT_CONFIG=$ELASTALERT_CONFIG_FILE" \
    -e "ELASTALERT_SUPERVISOR_CONF=$SUPERVISORD_CONFIG_FILE" \
    -e "ELASTICSEARCH_HOST=$ES_HOST" \
    -e "ELASTICSEARCH_PORT=$ES_PORT" \
    -e "SET_CONTAINER_TIMEZONE=true" \
    -e "CONTAINER_TIMEZONE=$TIMEZONE" \
    --cap-add SYS_TIME \
    --cap-add SYS_NICE \
    $IMAGE_ID
if [ $? != 0 ]; then
    echo "docker run command returned a non-zero exit code."
    echo -e "...FAILED\n"
    exit -1
fi
CID=$(docker ps --latest --quiet)
echo "Elastalert container with ID $CID is now running"
echo -e "...SUCCESS\n"

echo -e "STEP: Checking for Elastalert process inside container..."
echo "Waiting 10 seconds for elastalert process"
sleep 10
if docker top $CID | grep -q elastalert; then
    echo "Found running Elastalert process. Nice."
else
    echo "Did not find elastalert running"
    echo "You can view logs for the container with: docker logs -f $CID"
    echo "You can shell into the container with: docker exec -it $CID sh"
    echo -e "...FAILURE\n"
    exit -1
fi
echo -e "...SUCCESS\n"
But wait, there’s more!
Environmental Challenges
Our modus operandi is to have multiple copies of our environments (CI, Staging, Production) which form something of a pipeline for deployment purposes. I’ve gone through this sort of thing in the past, the most recent occurrence of which was when I wrote about rebuilding the ELK stack. It’s a fairly common pattern, but it does raise some interesting challenges, especially around configuration.
For Elastalert specifically, each environment should have the same baseline behaviour (rules, timings, etc), but also different settings for things like where the Elasticsearch cluster is located, or which Hipchat room notifications go to.
When using Octopus Deploy, the normal way to accomplish this is to have variables defined in your Octopus Deploy project that are scoped to the environments being deployed to, and then leverage some of the built-in substitution functionality to do replacements in whatever files need to be changed.
This works great at first, but has a few limitations:
- You now have two places to look when trying to track changes, which can become a bit of a pain. It’s much nicer to be able to view all of the changes (barring sensitive credentials of course) in your source control tool of choice.
- You can’t easily develop and test the environment outside of Octopus, especially if your deployment is only valid after passing through a successful round of substitutions in Octopus Deploy.
Keeping those two things in mind, we now lean towards having all of our environment specific parameters and settings in configuration files in source control (barring sensitive variables, which require some additional malarkey), and then loading the appropriate file based on some high level flags that are set either by Octopus or in the local development environment.
For Elastalert specifically, we settled on having a default configuration file (which is always loaded) and then environment-specific overrides. Which environment the deployment is executing in is decided by the following snippet of code:
echo -e "STEP: Determining Environment..."
if [ "$(type -t get_octopusvariable)" = function ]; then
    echo "get_octopusvariable function is defined => assuming we are running on Octopus"
    ENVIRONMENT=$(get_octopusvariable "Octopus.Environment.Name")
elif [ -n "$ENVIRONMENT" ]; then
    echo "--environment command line option was used"
else
    echo "Not running on Octopus and no --environment command line option used. Using 'Default'"
    ENVIRONMENT="Default"
fi
echo "ENVIRONMENT=$ENVIRONMENT"
echo -e "...SUCCESS\n"
Once the selection of the environment is out of the way, the deployed files are mutated by executing a substitution routine written in Python which does most of the heavy lifting (replacing any tokens of the format @@KEY@@ in the appropriate files).
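The real routine is Python and lives in the repository, but the core idea is simple enough to sketch in shell form (the file names, rule name and variables below are purely illustrative, not our actual configuration):

# Illustrative only: the real substitution routine is Python, but the
# idea is the same; replace @@KEY@@ tokens in a file with the values
# appropriate for the current environment.
substitute() {
    local file="$1" key="$2" value="$3"
    sed -i "s|@@${key}@@|${value}|g" "$file"
}

# e.g. after the default and environment-specific settings have been loaded:
substitute "$RUN_DIR/config/elastalert.yaml" "ELASTICSEARCH_HOST" "$ES_HOST"
substitute "$RUN_DIR/rules/example-rule.yaml" "HIPCHAT_ROOM" "$HIPCHAT_ROOM"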
To Be Continued
I’ve covered the two biggest challenges in the deployment of our Elastalert configuration, but I’ve glossed over quite a few pieces of the process because covering the entire thing in this blog post would make it way too big.
The best way to really understand how it works is to have a look at the actual repository.
With both the environment and configuration explained, all that is really left to do is bring it all together, and explain some areas that I think could use improvement.