If you're anything like me, you probably manage a lot of AWS accounts. Every day another service turns out to need its own AWS account, be it for isolation, security or billing purposes.
To make this process easier, I have a standard set of CloudFormation templates that gets deployed onto any new account, so we can ensure new accounts meet our compliance and security needs.
One of the services we deploy is AWS Config.
I wrote and deployed a CloudFormation template to set up Config, and it all worked. Great!
A day or so later I wanted to adjust the configuration, so I deleted the stack and redeployed it. The redeploy failed with an error!
Failed to put configuration recorder 'config-testing-ConfigRecorder-VMRU0MAZTLEU' because the maximum number of configuration recorders: 1 is reached.
Well, it turns out there is something interesting about the ConfigurationRecorder resource: you can only have one config recorder per region, and you can only create it ONCE when using CloudFormation!
You can however adopt a config recorder that was previously created and reuse that.
I've no idea why this is the case. It feels silly and breaks the whole idea of CloudFormation: when you delete a stack, you should be able to redeploy that same stack without error, so deleting the stack should remove the ConfigRecorder.
This issue probably lies with the CloudFormation team rather than the Config team, as I can see that the AWS Config API does have a DeleteConfigurationRecorder API call.
For now, my fix is to add a ConfigRecorderName parameter to my template. If that field is filled in, I attempt to adopt the ConfigRecorder of that name when the stack is deployed. If it isn't filled in, I create a new one.
The key sections of the template are shown below:
First we collect the config recorder name. In CloudFormation JSON, the Parameters section looks like this:
"Description": "(Optional) If we already have a config recorder - it can't be deleted - so enter its name here and we will adopt it",
Next we create a CloudFormation Condition based on what was entered into the Parameter.
CreateWithName will be true if the value of the Parameter is NOT equal to the default value of "".
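A sketch of that Conditions section, assuming the parameter is named ConfigRecorderName as described earlier:

```json
"Conditions": {
  "CreateWithName": {
    "Fn::Not": [
      { "Fn::Equals": [ { "Ref": "ConfigRecorderName" }, "" ] }
    ]
  }
}
```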
Now that we have the Parameters and the Condition in place, it's time to tie it all together in the Resources section.
On the line titled Name:, we have an Fn::If statement that checks whether the CreateWithName condition is true.
If the condition is true, the template uses the name entered into the parameter field. If it is false, it inserts the AWS::NoValue pseudo parameter, which causes CloudFormation to ignore the property completely.
As per the CloudFormation documentation, if the Name property of AWS::Config::ConfigurationRecorder is not supplied, CloudFormation allocates a randomly generated name and creates the config recorder.
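Putting that together, the resource might look like the sketch below. The Fn::If and AWS::NoValue usage follows the description above; the RoleARN value and the ConfigRole resource it points at are placeholders, since the original template's role setup isn't shown here:

```json
"ConfigRecorder": {
  "Type": "AWS::Config::ConfigurationRecorder",
  "Properties": {
    "Name": {
      "Fn::If": [
        "CreateWithName",
        { "Ref": "ConfigRecorderName" },
        { "Ref": "AWS::NoValue" }
      ]
    },
    "RoleARN": { "Fn::GetAtt": [ "ConfigRole", "Arn" ] }
  }
}
```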
And there you have it! It's not the most elegant solution, but it's the best one I've been able to find so far.
Hopefully this is useful to someone else in the future, and do hit me up on Twitter/email if you find a better way!
Today I've got some news. No, I'm not pregnant, although that would be odd, interesting and confusing for a few reasons.
I have, however, moved countries; that's right, I no longer live in Sydney, Australia.
I now call New York City in the US home.
This also means that I've left my previous employer, Reckon, and now work for a US-based company called FarePortal as an AWS Solution Architect.
So far, NYC has been interesting! It's very different to home but enjoyable. I'm certainly loving my 40-minute walk to the office and that we get a decent winter with snow and the whole works!
Apart from moving, I've been up to a few other exciting things that I will be blogging about separately, including reInvent, more software tools I've been building and my journey building up an AWS and agile development practice with a company who is new to it all!
Just for fun, I've included a few photos from NYC!
This tool was designed specifically for AWS OpenVPN bastion hosts.
AWS best practice tells us that when we deploy a VPC, we should also deploy a bastion host for remote administration and management of the instances inside our VPC.
Even if you have a Direct Connect or VPC-level VPN to your corporate datacenter or office, it is still recommended to have a bastion host for your AWS environment, for a few reasons.
The day you most need to access instances inside your VPC will be the day the corporate datacenter is having problems.
Personally, I also like to separate my administration traffic from my application traffic. Direct Connect and VPC VPN connections, in my view, are for application traffic, not for administration.
Having all administration traffic run through a central host also provides a degree of auditing and control over that traffic. After all, we don't really need our bastion that much, because we shouldn't be making non-scripted manual changes to our instances, right? :)
One of the commonly suggested options is using SSH and port forwarding. Anyone who has ever used that is sure to know why it is painful and not scalable.
My preference is to use the open-source OpenVPN TLS/SSL VPN product to provide this access.
In addition to the standard private and public key pairs required, I also enforce usernames and passwords. I find users tend to expect this (even if they don't need it), and it also makes user management by the operations team simpler.
By default, usernames and passwords are stored in the standard Linux password files, namely /etc/passwd for user details and /etc/shadow for the encrypted passwords, and on logon OpenVPN asks the system whether the supplied credentials match.
The trouble with this is that managing these accounts is a pain, and they are tied to the VPN server instance. That causes issues if the instance fails or terminates, or if I want highly available bastion hosts in an Auto Scaling group behind an Elastic Load Balancer. For that I need a shared credential store, and hence this module was born.
OpenVPN is configured to call this module on logon, and the module checks the supplied credentials against a DynamoDB table. It supports the same hash format as /etc/shadow, for simple migration from an existing credential store.
I only support the newer ID 6 ($6$) format for encrypted passwords used by glibc 2.7 and above, which is a SHA-512 hash combined with a random salt, as per the crypt(3) man page.
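The shadow-compatible check can be sketched in Python with the standard library's crypt module. This is a minimal illustration of the verification step, not the module's actual code; the function name and the commented-out DynamoDB lookup (table and attribute names included) are hypothetical:

```python
import crypt  # Unix-only stdlib wrapper around crypt(3); removed in Python 3.13
import hmac

def verify_password(supplied: str, stored_hash: str) -> bool:
    """Check a plaintext password against a $6$ (SHA-512) crypt hash,
    the same format used in /etc/shadow."""
    if not stored_hash.startswith("$6$"):
        return False  # only the SHA-512 scheme is supported
    # crypt() re-hashes the password using the salt embedded in the stored hash
    computed = crypt.crypt(supplied, stored_hash)
    # constant-time comparison to avoid leaking timing information
    return hmac.compare_digest(computed, stored_hash)

# Hypothetical DynamoDB lookup (table and attribute names are made up):
# import boto3
# item = boto3.resource("dynamodb").Table("vpn_users").get_item(
#     Key={"username": username}).get("Item")
# ok = bool(item) and verify_password(password, item["password_hash"])
```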
Check it out, and feel free to fork it and send pull requests with any changes!
The debate is going to be AWS vs Azure vs Google Cloud vs EMC and there are representatives for the various platforms.
Of course I will be representing the AWS side along with David Cheal, CTO, Strut Digital.
At the recent AWS Sydney Summit, I was given the honour of being asked to co-present a 101-level session titled "Introducing Well-Architected For Developers".
The talented Ben Potter, a security specialist from the AWS Professional Services team, was my co-presenter.