Alarming Connections
- Posted in:
- aws
- alerting
- monitoring
- elb
- logstash
- cloudwatch
Alerting on potential problems before they become real problems is something that we have been historically bad at. We’ve got a wealth of information available both through AWS and in our ELK Stack, but we’ve never really put together anything to use that information to notify us when something interesting happens.
In an effort to alleviate that, we’ve recently started using CloudWatch alarms to do some basic alerting. I’m not exactly sure why I shied away from them to begin with, to be honest, as they are easy to set up and maintain. It might have been that the old way we managed our environments didn’t lend itself to easy, non-destructive updates, making tweaking and creating alarms a relatively onerous process.
Regardless of my initial hesitation, when I was polishing the ELK stack environment recently I made sure to include some CloudWatch alarms to let us know if certain pieces fell over.
The alerting setup isn’t anything special:
- Dedicated SNS topics for each environment (i.e. CI, Staging, Production, Scratch)
- Each environment has two topics, one for alarms and one for notifications
- The entire team is signed up to emails from both Staging and Production, but CI/Scratch are optional
Email topic subscriptions are enough for us for now, but we have plans to use the topics to trigger messages in HipChat as well.
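To give a feel for the shape of it, a per-environment topic with an email subscription is only a handful of lines of CloudFormation. This is just a sketch, not a copy of our templates; the resource name and the TeamEmail parameter are illustrative.

"AlarmsTopic" : {
    "Type" : "AWS::SNS::Topic",
    "Properties" : {
        "Subscription" : [
            { "Protocol" : "email", "Endpoint" : { "Ref" : "TeamEmail" } }
        ]
    }
}

The component stacks then take the topic ARN as a parameter (the AlarmsTopicARN you’ll see referenced in the alarms below).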
I started with some basic alarms, like unhealthy instances > 0 for Elasticsearch/Logstash. Both of those components expose HTTP endpoints that can easily be pinged, so if they stop responding to traffic for any decent amount of time, we want to be notified. Other than some tweaks to the Logstash health check (I tuned it too tightly initially, so it was going off whenever an HTTP request took longer than 5 seconds), these alarms have served us well in picking up things like Logstash/Elasticsearch crashing or not starting properly.
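For illustration, an “unhealthy instances” alarm looks like a slightly different flavour of the one I show further down. This is a sketch rather than a copy of our actual template; the resource names (ElasticsearchLoadBalancer and so on), threshold and evaluation period are placeholders.

"ElasticsearchLoadBalancerUnhealthyInstancesAlarm" : {
    "Type" : "AWS::CloudWatch::Alarm",
    "Properties" : {
        "AlarmDescription" : "Alarm for when one or more Elasticsearch instances behind the load balancer are marked unhealthy",
        "AlarmActions" : [ { "Ref" : "AlarmsTopicARN" } ],
        "OKActions" : [ { "Ref" : "AlarmsTopicARN" } ],
        "MetricName" : "UnHealthyHostCount",
        "Namespace" : "AWS/ELB",
        "Statistic" : "Maximum",
        "Period" : "60",
        "EvaluationPeriods" : "5",
        "Threshold" : "0",
        "ComparisonOperator" : "GreaterThanThreshold",
        "Dimensions" : [ { "Name" : "LoadBalancerName", "Value" : { "Ref" : "ElasticsearchLoadBalancer" } } ]
    }
}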
As far as Elasticsearch is concerned, this is pretty much enough.
With Logstash being Logstash though, more work needed to be done.
Transport Failure
As a result of the issues I had with HTTP outputs in Logstash, we’re still using the Logstash TCP input/output combo.
The upside of this is that it works.
The downside is that sometimes the TCP input on the Broker side seems to crash and get into an unrecoverable state.
That wouldn’t be so terrible if it took Logstash with it, but it doesn’t. Instead, Logstash continues to accept HTTP requests, and just fails all of the TCP stuff. I’m not actually sure if the log events received via HTTP during this time are still processed through the pipeline correctly, but all of the incoming TCP traffic is definitely black holed.
As a result of the HTTP endpoint continuing to return 200 OK, the alarms I set up for unhealthy instances completely fail to pick up this particular issue.
In fact, because of the nature of TCP traffic through an ELB, and the relatively poor quality of TCP metrics, it can be very difficult to tell at a glance whether or not it’s working. We can’t use requests or latency, because they have no bearing on TCP traffic, and we certainly can’t use status codes (obviously). Maybe network traffic, but that doesn’t seem like the best idea due to natural traffic fluctuations.
The only metric that I could find was “Backend Connection Errors”. This metric appears to be a measure of how many low level connection errors occurred between the ELB and the underlying EC2 instances, and seems like a good fit. Even better, when the Logstash TCP input falls over, it is this metric that actually changes, as all of the TCP traffic being forwarded through the ELB fails miserably.
One simple alarm initialization through CloudFormation later, and we were safe and secure in the knowledge that the next time the TCP input fell over, we wouldn’t find out about it 2 days later.
"BrokerLoadBalancerBackendConnectionErrorsAlarm": { "Type" : "AWS::CloudWatch::Alarm", "Properties" : { "AlarmName" : { "Fn::Join": [ "", [ "elk-broker-", { "Ref": "OctopusEnvironment" }, "-backend-errors" ] ] }, "AlarmDescription" : "Alarm for when there is an increase in the backend connection errors on the Broker load balancer, typically indicating a problem with the Broker EC2 instances. Suggest restarting them", "AlarmActions" : [ { "Ref" : "AlarmsTopicARN" } ], "OKActions": [ { "Ref" : "AlarmsTopicARN" } ], "TreatMissingData": "notBreaching", "MetricName" : "BackendConnectionErrors", "Namespace" : "AWS/ELB", "Statistic" : "Maximum", "Period" : "60", "EvaluationPeriods" : "5", "Threshold" : "100", "ComparisonOperator" : "GreaterThanThreshold", "Dimensions" : [ { "Name" : "LoadBalancerName", "Value" : { "Ref" : "BrokerLoadBalancer" } } ] } }
Mistakes Were Made
Of course, a few weeks later the TCP input crashed again and we didn’t find out for 2 days.
But why?
It turns out that the only statistic worth a damn for Backend Connection Errors when alerting is SUM, and I created the alarm on the MAXIMUM, assuming that it would act like requests and other metrics (which give you the maximum number of X that occurred during the time period). Graphing the maximum backend connection errors during a time period where the TCP input was broken gives a flat line at y = 1, which is definitely not greater than the threshold of 100 that I entered.
I switched the alarm to SUM and, as far as I can see, the next time the TCP input goes down we should get a notification.
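For completeness, the fix is a one line change to the alarm definition above:

"Statistic" : "Sum",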
But I’ve been fooled before.
Conclusion
I’ll be honest, even though I did make a mistake with this particular CloudWatch alarm, I’m glad that we started using them.
Easy to implement (via our CloudFormation templates) and relatively easy to use, they provide an element of alerting on our infrastructure that was sorely missing. I doubt we will go about making thousands of alarms for all of the various things we want to be alerted on (like increases in 500s and latency problems), but we’ll definitely include a few alarms in each stack we create to yell at us when something simple goes wrong (like unhealthy instances).
To bring it back to the start, I think one of the reasons we’ve hesitated to use CloudWatch alarms is that we were waiting for the all-singing, all-dancing alerting solution based on the wealth of information in our log stack, but I don’t think that is going to happen in a hurry.
It’s been years already.