I’ve recently been setting up devices that need ServerDensity monitoring added to them automatically when they start up. Installing the ServerDensity agent itself can be scripted with shell commands, but the agent’s config also needs the device’s API key. I use a Python script to create a new device in ServerDensity via its API and output the agent key for the newly created device.
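The script boils down to one authenticated POST and one field pulled out of the JSON response. Here is a minimal sketch using only the standard library — the endpoint URL, the `token` query parameter, and the `agentKey` response field are assumptions about the ServerDensity v2 API, so check them against your account before relying on this:

```python
import json
import urllib.request

# Assumed ServerDensity v2 inventory endpoint -- verify against your account.
API_URL = "https://api.serverdensity.io/inventory/devices"

def extract_agent_key(response_body):
    """Pull the agent key out of the device-creation JSON response.

    Assumes the response carries the key in an 'agentKey' field.
    """
    return json.loads(response_body)["agentKey"]

def create_device(name, api_token):
    """Create a device and return its agent key (makes a network call)."""
    req = urllib.request.Request(
        f"{API_URL}?token={api_token}",
        data=json.dumps({"name": name}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_agent_key(resp.read().decode())
```

The returned key can then be written straight into the agent’s config file by the same provisioning script.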
In this post (hopefully the first of more), I hope to provide a gentle introduction to HBase, since I never had one myself! This specific post focuses on HBase’s data structure, but I hope to do more posts introducing HBase programming using a combination of HBase on Amazon’s Elastic MapReduce (I like to call it Amazon’s HaaS, or HBase-as-a-Service) and Python with the HappyBase library, which offers a really easy wrapper around HBase’s Thrift API.
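A useful mental model for HBase’s data structure is a sparse, sorted, multidimensional map: row key → column family → column qualifier → timestamp → value, where each cell can hold multiple timestamped versions and reads return the newest one. A conceptual sketch in plain Python (the row, family, and qualifier names are made up for illustration):

```python
# HBase table as nested maps:
#   row key -> column family -> column qualifier -> timestamp -> value
# Keys and values are raw bytes; timestamps are integers (ms since epoch).
table = {
    b"row1": {
        "cf": {
            # Two versions of the same cell -- HBase keeps several,
            # newest first.
            b"name": {2: b"alice", 1: b"al"},
        },
    },
}

def get_latest(table, row, family, qualifier):
    """Return the most recent version of a cell, as an HBase get would."""
    versions = table[row][family][qualifier]
    return versions[max(versions)]
```

This also shows why HBase is “sparse”: a missing qualifier is simply absent from the map, costing nothing, rather than being stored as a NULL the way an empty relational column would be.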
Amazon’s Glacier is a fairly new AWS service for low-cost storage and backups, but there is little documentation on how to actually use it.
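One of the least-documented stumbling blocks is that Glacier’s upload API requires a SHA-256 *tree hash* of the payload (sent in the `x-amz-sha256-tree-hash` header): the data is hashed in 1 MiB chunks, and the chunk hashes are combined pairwise up to a single root. A sketch of that computation, usable before handing the archive to whichever client library you choose:

```python
import hashlib

MIB = 1024 * 1024

def tree_hash(data):
    """Compute the SHA-256 tree hash Glacier expects for uploads.

    Hash each 1 MiB chunk, then repeatedly concatenate and hash
    adjacent pairs until one root hash remains.
    """
    chunks = [data[i:i + MIB] for i in range(0, len(data), MIB)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            if i + 1 < len(level) else level[i]  # odd leftover carries up
            for i in range(0, len(level), 2)
        ]
    return level[0].hex()
```

For payloads of 1 MiB or less the tree hash collapses to a plain SHA-256 of the data, which makes it easy to sanity-check the implementation.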
I wanted to be able to run geo-data calculations on Amazon Elastic MapReduce using Hadoop streaming jobs, particularly in Python. While you cannot easily install the required Python dependencies on the cluster nodes, this problem can be solved with the cacheArchive feature of Hadoop, which ships an archive of the dependencies to every task and unpacks it into the task’s working directory.
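Hadoop unpacks the archive passed via `-cacheArchive s3://bucket/deps.zip#deps` into the task’s working directory under the fragment name after the `#`, so the streaming mapper just has to put that directory on `sys.path` before importing anything bundled inside it. A minimal sketch — the fragment name `deps` is a hypothetical choice, not anything Hadoop mandates:

```python
import os
import sys

def add_cache_archive_to_path(fragment="deps"):
    """Prepend the directory Hadoop unpacked from -cacheArchive ...#<fragment>
    to sys.path, so bundled pure-Python packages become importable.

    Returns the directory path so the caller can log or verify it.
    """
    archive_dir = os.path.join(os.getcwd(), fragment)
    if os.path.isdir(archive_dir) and archive_dir not in sys.path:
        sys.path.insert(0, archive_dir)
    return archive_dir
```

The first lines of the mapper would call `add_cache_archive_to_path()` and only then import the geo libraries, since at interpreter start-up the unpacked archive is not yet on the path.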
A simple Python program to look up the full country name (and other details) from a country code. You can also look up time zones, capitals, and more from it.
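The core idea can be sketched with a small hand-made mapping keyed by ISO 3166-1 alpha-2 code — a real program would load a complete dataset (for example from a library such as pycountry), so the three entries below are just illustrative samples:

```python
# Illustrative sample of an ISO 3166-1 alpha-2 lookup table; a real
# program would load the full dataset rather than hard-code it.
COUNTRIES = {
    "GB": {"name": "United Kingdom", "capital": "London",
           "timezones": ["Europe/London"]},
    "FR": {"name": "France", "capital": "Paris",
           "timezones": ["Europe/Paris"]},
    "US": {"name": "United States", "capital": "Washington, D.C.",
           "timezones": ["America/New_York", "America/Chicago",
                         "America/Denver", "America/Los_Angeles"]},
}

def country_details(code):
    """Return the details dict for a two-letter country code
    (case-insensitive)."""
    return COUNTRIES[code.strip().upper()]
```

Normalising the code with `strip().upper()` keeps the lookup forgiving about input taken from config files or command-line arguments.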