CommsCentral

My Technology Adventures


AWS Cloud Warrior Program!

Posted at: 2016-04-12 @ 13:54:09

Sometimes there are rewards for hard work and effort that aren't just personal satisfaction and a paycheck at the end of the month. One of those was being selected to become part of the AWS Cloud Warrior program.

At my employer we've been big fans of AWS technology for a while. For me, AWS makes so much sense. I've spent a lot of years in various aspects of infrastructure consulting, and what I've observed is that very, very few companies do infrastructure well.

Since I started in IT I've written code and been interested in system operations; to me, they go hand in hand as part of being an engineer. The separation that existed between infrastructure/IT and development made no sense at all. I'd see operational people doing mass mailbox moves almost by hand, or running through a list of users every month and manually removing them from AD. Automation and scripting were rare. Why?

On the development side I'd see mission-critical systems deployed live into production with no fault tolerance built in, running on a single old server that had no documentation and hundreds of manual "tweaks".

As soon as I started to have contact with the AWS platform I was sold. More than sold: I decided it was something I wanted to work on in the future. DevOps and cloud application development/design was where I wanted to be.

Luckily my employer had a very insightful CIO who was able to drive this with business owners. The end result: the business saw massive benefits, and we've been able to expand the operation until AWS has become our platform of choice. We are now moving towards "all in on AWS", meaning that all of our hosting and internal corporate resources will be on the AWS platform.

Wind the clock forward a few years and I've now found myself as part of the AWS Cloud Warrior program.

So what is an AWS Cloud Warrior? Warriors are hand-selected by AWS as individuals recognised as having a very high level of knowledge of, and experience with, the platform. They have demonstrated their ability to build innovative and creative solutions using AWS recommended practices.
Warriors are considered to be technical evangelists of the AWS platform.

At the moment there are around 20 AWS Cloud Warriors in the Australia Pacific Network (APN).
I feel very privileged to be part of such a small group and will continue my efforts to remain worthy of such an honour.

AWS Warriors have a few get-togethers per year, with topics that vary. A lot of the information provided is under NDA, so I won't be blogging about those. However, some of it is just talking to other Warriors about the problems they've had and how they've overcome them. Like: how is everyone testing Lambda functions?

AWS is planning to increase awareness of the AWS Cloud Warriors, and if you attend the Sydney AWS Summit you're likely to see a banner with all the faces on it, like they did last year. There may be some opportunities to talk to a few Warriors as well.


Issues with NTLM when behind AWS Elastic Load Balancers - Cause and solution

Posted at: 2016-02-26 @ 23:50:38

Recently I was troubleshooting an issue that appeared after deploying Microsoft Dynamics (CRM) 2015 behind Amazon Web Services (AWS) Elastic Load Balancers (ELBs), and it caused me to do some investigation.

The application layout is shown below in a simplified diagram. The important part is two servers running CRM Frontend services behind an ELB.
[Image: CRM-Diagram-network-flow.png]

The ELB was configured with HTTP listeners. In this mode the AWS ELB actually acts as a reverse proxy: the client's HTTP request is terminated at the ELB, and the ELB then opens its own connection back to the HTTP server. Confused? This diagram should help.
[Image: CRM-Reverse-proxy-tcp-flow.png]

When visiting the site in an IE browser you'd be prompted for login credentials; on submitting a username and password, another prompt would immediately come up. After clicking OK a few times, part of the CRM page would error with HTTP Error 401.1 - Unauthorized: Access is denied.

----
The Cause
So to understand why this doesn't work, we have to understand NTLM a little bit.

When a connection is made, the server denies the request and asks the client to authenticate. It does this with an HTTP 401 Unauthorized response carrying an NTLMSSP_CHALLENGE.
The client will then retry with an NTLMSSP_AUTH.

Once the authentication is completed, that TCP connection is authenticated. The authentication holds as long as the TCP session stays the same: same source address, same destination address, same source port and same destination port.
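
Roughly, the exchange on the wire looks like this (a simplified sketch; the token contents are abbreviated):

C: GET / HTTP/1.1
S: HTTP/1.1 401 Unauthorized
   WWW-Authenticate: NTLM
C: GET / HTTP/1.1
   Authorization: NTLM <NTLMSSP_NEGOTIATE>
S: HTTP/1.1 401 Unauthorized
   WWW-Authenticate: NTLM <NTLMSSP_CHALLENGE>
C: GET / HTTP/1.1
   Authorization: NTLM <NTLMSSP_AUTH>
S: HTTP/1.1 200 OK    (this TCP connection is now authenticated)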

The packet capture shows a browser connecting into CRM directly. For the first TCP connection the browser starts, it gets challenged for auth; once that is complete, all the requests on that same TCP connection are authorized.
The browser also opens up another TCP connection; it is again challenged, and the browser supplies the username and password again, as required for the new connection.
[Image: This-CRM-Working-server.png]


If you think back to the ELB TCP connection diagram above, you can probably start to see why there might be a problem when using HTTP listeners, or any HTTP reverse proxy.

The second capture screenshot below is via an ELB with HTTP listeners enabled.

At the start it all looks OK: new connection, auth prompt, browser supplies username/password, just like last time.
Then, for a TCP connection that is already authenticated, the browser sends an NTLMSSP_NEGOTIATE... Wait, wouldn't it only do that for a NEW TCP session?
[Image: This-crm-broken-server.png]

The end user's browser has actually switched source ports, i.e. established another new TCP connection. However, the ELB is proxying that new connection back to the CRM server on the same TCP ports as the previous connection.

This is 100% supported for pure HTTP, but not for NTLM. In NTLM it causes the existing connection to be invalidated and re-authenticated. The browser thinks it has two authenticated TCP connections running, when in fact it only has one.

The end result: you get prompted for username/password credentials again and again, because the browser thinks the server isn't accepting the authentication information supplied, as it keeps getting an unauthorized response on a TCP connection that was already NTLM-authenticated.

----
The solution
The solution is very simple. Disable HTTP listeners on the ELB and use TCP listeners only!
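
If your ELB already has HTTP listeners configured, you can swap them over from the command line. A minimal sketch, assuming a classic ELB named my-elb serving plain HTTP on port 80 (adjust the name and ports to suit your environment):

# Remove the existing HTTP listener on port 80
aws elb delete-load-balancer-listeners --load-balancer-name my-elb --load-balancer-ports 80

# Re-create it as a straight TCP passthrough listener
aws elb create-load-balancer-listeners --load-balancer-name my-elb \
    --listeners Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=80

With TCP listeners the ELB no longer terminates the HTTP session, so the client-to-server connection mapping stays one-to-one and NTLM's per-connection authentication survives.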


What fun! So for those who care, now you know NTLM doesn't work with ELB HTTP listeners, and why!
This will actually apply to any pure HTTP load balancer that doesn't have native support for NTLM.

Caveman

----
Some handy references
https://msdn.microsoft.com/en-us/library/dd925287%28v=office.12%29.aspx
https://github.com/nodejitsu/node-http-proxy/issues/362
https://pubs.vmware.com/NSX-62/index.jsp?topic=%2Fcom.vmware.nsx.admin.doc%2FGUID-A781BD86-A40E-4B71-8634-5677CDD52664.html

https://s3.amazonaws.com/quickstart-reference/microsoft/sharepoint/latest/doc/Microsoft_SharePoint_2013_on_AWS.pdf -- This document states NTLM isn't supported on ELBs. At the time of writing I've got NTLM workloads behind ELBs in production, working without issue using TCP listeners.




AWS CodeDeploy with GitHub auto-deployment Webhooks - The bits they don't tell you

Posted at: 2015-09-15 @ 16:47:54

So you've decided that AWS CodeDeploy looks like a good tool.

If you're like me, you're already having a major love affair with GitHub, and the idea of a git commit directly triggering an AWS CodeDeploy deployment just makes that even better.

But if you just follow the AWS documentation directly: set up your webhooks in GitHub, test each one, then push a commit... nothing happens!

You'll find that you actually need to do an additional step that isn't well documented.

1. Go into the AWS Console and over to CodeDeploy


2. Select your application (any application will do) and expand a deployment group

[Screenshot: Expand Deployment Group]



3. Select "Deploy New Revision"

[Screenshot: Deploy New Revision]



4. Select "My application is stored in GitHub" and click "Connect with GitHub".

[Screenshot: Connect with GitHub]




5. This will launch a new browser window asking you to log in to GitHub. In the backend this sets up the OAuth tokens required to allow AWS to talk to GitHub.

This appears to be required even if you have given GitHub all the required AWS security keys to allow CodeDeploy deployments.
If this specific step isn't done, GitHub auto-deploy and AWS CodeDeploy webhooks won't work correctly.
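
Once the OAuth link exists you can also trigger a deployment from a GitHub revision via the CLI, which is a handy way to confirm everything is wired up. A sketch, where MyApp, Production and myuser/myrepo are hypothetical placeholder names:

aws deploy create-deployment \
    --application-name MyApp \
    --deployment-group-name Production \
    --github-location repository=myuser/myrepo,commitId=<full-commit-sha>

If that works but webhook-driven pushes still don't, revisit the OAuth step above.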

Caveman



AWS CloudFormation helper scripts on Ubuntu - aws-cfn-bootstrap

Posted at: 2015-07-01 @ 15:45:42

AWS CloudFormation provides a set of Python helper scripts that you can use to install software and start services on an Amazon EC2 instance that you create as part of your stack.
If you've used CloudFormation at all, you've probably already used these.

The tools in the bundle are used to deploy software and files; the bundle also includes a service that monitors for updates to the stack and determines if any action is required.

If you're deploying an Amazon Linux AMI this is already installed, and you'd kick it off with a fairly standard UserData script pushed to the instance via CloudFormation, as per below.

"UserData": {
"Fn::Base64": {
"Fn::Join": ["", [
"#!/bin/bash -xe\n",
"yum update -y aws-cfn-bootstrap\n",

"/opt/aws/bin/cfn-init -v ",
" --stack ", {
"Ref": "AWS::StackName"
},
" --resource Server ",
" --configsets install_init ",
" --region ", {
"Ref": "AWS::Region"
}, "\n",

"/opt/aws/bin/cfn-signal -e $? ",
" --stack ", {
"Ref": "AWS::StackName"
},
" --resource Server ",
" --region ", {
"Ref": "AWS::Region"
}, "\n"
]]
}
}
You'll find examples like this in all the AWS-supplied CloudFormation templates, which makes life simple.

However, if you're using Ubuntu you'll find that there is no package by the name of aws-cfn-bootstrap.
So here is the UserData code that we need to use instead for Ubuntu:
"UserData": {
"Fn::Base64": {
"Fn::Join": ["", [
"#!/bin/bash -xe\n",
"apt-get update\n",
"apt-get -y install python-pip\n",
"pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"cp /usr/local/init/ubuntu/cfn-hup /etc/init.d/cfn-hup \n",
"chmod +x /etc/init.d/cfn-hup \n",
"update-rc.d cfn-hup defaults \n ",
"service cfn-hup start \n",


"cfn-init -v ",
" --stack ", {
"Ref": "AWS::StackName"
},
" --resource Server ",
" --configsets install_init ",
" --region ", {
"Ref": "AWS::Region"
}, "\n",

"cfn-signal -e $? ",
" --stack ", {
"Ref": "AWS::StackName"
},
" --resource Server ",
" --region ", {
"Ref": "AWS::Region"
}, "\n"
]]
}
}
What's this all doing?

Firstly we do a standard apt-get update to make sure we get the newest information for available packages.

Then we run apt-get -y install python-pip, which installs python-pip and says yes to the confirmation prompt. python-pip is a package manager/installer for Python tools.
Next we install the aws-cfn-bootstrap bundle from the tarball with pip, copy over the cfn-hup service init.d file, and update the permissions to allow execution. This effectively creates the Linux service for the cfn-hup daemon.
We update the runlevel configuration with update-rc.d to ensure our service will run correctly at all the various Linux runlevels.
Lastly we start the cfn-hup service.
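
For readability, here is the same bootstrap sequence pulled out of the JSON escaping as a plain shell script:

#!/bin/bash -xe
apt-get update
apt-get -y install python-pip
pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
cp /usr/local/init/ubuntu/cfn-hup /etc/init.d/cfn-hup
chmod +x /etc/init.d/cfn-hup
update-rc.d cfn-hup defaults
service cfn-hup start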


There are other ways to do this which are shorter, however they have some catches.
For example, instead of python-pip you can use easy_install, and I've seen a few examples of that.
When you use easy_install you don't seem to get a full install, or the init.d files in /usr/local/init/.

This means you can't create the cfn-hup service, which means stack updates won't update the instance https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-hup.html.

Using the AWS sample template at https://s3-us-west-2.amazonaws.com/cloudformation-templates-us-west-2/VPC_AutoScaling_and_ElasticLoadBalancer.template

you can see they have two sections which specifically set up the cfn-auto-reloader and cfn-hup configurations. If you have these in your template, or want to use them, you need to install the aws-cfn-bootstrap service as specified above. Otherwise you'll have problems!

"files" :{
"/etc/cfn/hooks.d/cfn-auto-reloader.conf" : {
"content": { "Fn::Join" : ["", [
"[cfn-auto-reloader-hook]\n",
"triggers=post.update\n",
"path=Resources.LaunchConfig.Metadata.AWS::CloudFormation::Init\n",
"action=/opt/aws/bin/cfn-init -v ",
" --stack ", { "Ref" : "AWS::StackName" },
" --resource LaunchConfig ",
" --region ", { "Ref" : "AWS::Region" }, "\n",
"runas=root\n"
]]}
}
},

"services" : {
"sysvinit" : {
"httpd" : { "enabled" : "true", "ensureRunning" : "true" },
"cfn-hup" : { "enabled" : "true", "ensureRunning" : "true",
"files" : ["/etc/cfn/cfn-hup.conf", "/etc/cfn/hooks.d/cfn-auto-reloader.conf"]}
}
}


I hope this helps someone else out who has the same issue. I found resources on the internet very limited and lacking in detail on this specific item. Hence the post!

Enjoy!


AWS Command Line Interface 101 getting started - Profiles and Using Multiple AWS accounts

Posted at: 2015-06-27 @ 22:31:37

So in the past year or so I've started working a lot more in the AWS space. And I mean a lot.
To the point that I'd be unwilling to go back to old-school environments, but that's a story for another day.

The AWS Command Line Interface (CLI for short) is a set of tools for managing your AWS environment from the command line.

Now this is super handy, and if you're really into your DevOps and AWS you'll be using it lots.

With that in mind, I thought I'd share my 101 on getting the AWS CLI set up so you can use it. I am a Linux user so this is going to be from a Linux perspective; it should also work on OS X. If you're a Windows user you'll have to download an .exe and skip Step 1 below.
See https://aws.amazon.com/cli/

Step 1 - Install
The install requires Python and Python pip. If using an apt-based OS,
apt-get install python-pip
should get you all you need.

Install the AWS CLI with:
pip install awscli
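
You can confirm the install worked with:

aws --version

which prints the CLI and Python versions.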

Step 2 - First run

Let's start off with a simple command to list your instances. Something like:
aws ec2 describe-instances
When you run this you'll get something ugly like:
Unable to construct an endpoint for ec2 in region None
WHOAH! What happened?

This is because the AWS CLI doesn't know what your region or credentials are. Now, you can specify them with command line flags, but having to type or paste those in every time is going to get annoying by about the 2nd command you ever run.

The answer: PROFILES!

Step 3 - Profiles

Profile setup is easy. The below shows the initial command and prompts:

aws configure --profile production
AWS Access Key ID []: (type in your AWS IAM Access Key ID)
AWS Secret Access Key []: (type in your AWS IAM Secret Access Key)
Default region name []: (enter your default region; for AU enter ap-southeast-2)
Default output format []: (enter the format you want responses displayed in; I recommend text, however json is also an option)

Now when you run a command, add --profile production to it and all your settings will be applied.
You can test it with something like:
 aws ec2 describe-instances --profile production

If you have instances in AWS under that account they should start listing out. If you have no instances, at least you shouldn't get an error.
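
Behind the scenes, aws configure just writes two plain-text files into your home directory, which you can also edit by hand. Roughly (key values abbreviated here):

# ~/.aws/credentials
[production]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

# ~/.aws/config
[profile production]
region = ap-southeast-2
output = text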

Step 4 - Multiple profiles
As your AWS environments get larger, and if you have lots of different business groups using AWS, you'll want separate accounts, even if it's just to separate billing. Plus you might have your own personal account.

AWS CLI profiles work great here: just run through Step 3 again, however instead of it being the production account, it might be your personal account. Just call it personal, enter the IAM details and voila! You can now access your personal AWS account from the AWS CLI on the same machine by just using --profile personal.
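
If you're working in one account for a whole session, you can also set the profile once with an environment variable (AWS_DEFAULT_PROFILE is the variable the CLI checks) instead of adding the flag to every command:

export AWS_DEFAULT_PROFILE=personal
aws ec2 describe-instances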

If you're anything like me, you'll end up with a lot of these! I've got about 6 now.

Enjoy!




OpenVpn with easy-rsa3 - June 2015 - setup guide

Posted at: 2015-06-19 @ 22:28:54

Easy-rsa is a wrapper application for OpenSSL that makes creating a CA (certificate authority) and issuing certificates easy and somewhat user-friendly.

It's most commonly used with OpenVPN and was designed to go with it.

The easy-rsa group have now released version 3 over at GitHub: https://github.com/OpenVPN/easy-rsa

While there are lots of guides for getting OpenVPN working with easy-rsa2, there aren't many (at the time of writing) for easy-rsa3. The commands have completely changed, and the old guides don't port over to the new version very well.

This will show you how to use easy-rsa3, along with how to set up OpenVPN with a secure configuration, including the use of perfect forward secrecy and large keys.


NOTE: for those who want to jump directly to the commands, you can find the script over at my GitHub account: https://github.com/adcreare/openvpn/blob/master/easy-rsa3/setup-easyrsa3-commands.sh


Step 1 - Download easy-rsa3 and set up configuration (vars file)

1. Download easy-rsa3; the best source is the GitHub website https://github.com/OpenVPN/easy-rsa/tree/master/easyrsa3

2. Rename vars.example to vars

3. Most of the configuration values in the file will have a # in front of them. This means the default setting will apply.
Locate the configuration value EASYRSA_KEY_SIZE and push it out to a 4096 key size. While 2048 is probably enough right now, in my view it doesn't hurt to push past that, and there have been some issues appearing with smaller DH keys.
Once finished, the line should look like the below (ensure you remove the # to make it live):

set_var EASYRSA_KEY_SIZE 4096


Step 2 - Build the CA keys
Technically this step can be done on another machine that isn't the VPN server, and that is actually best practice if you're building a secure environment. That way, if the VPN server does get compromised, you don't lose control of the certificate authority that has issued all the keys.
For the purposes of this guide I'm going to leave the CA on the same server as the VPN.

Run the following two commands to build the CA. The first sets up the PKI; the second builds the certificates for the CA. We've specified no password for these keys; a password could be set, and you'd then be prompted for it when generating keys.

Ensure you have already issued a cd command to change into the easy-rsa3 directory.


./easyrsa init-pki
./easyrsa build-ca nopass


Step 3 - Build the DH (Diffie-Hellman) key
This DH key is what gives us perfect forward secrecy.

./easyrsa gen-dh


Step 4 - Generate the OpenVPN server certificate/key

Issue the following commands to generate the server key and certificate.
Again we will use the nopass option; otherwise we'd need to enter a password each time the OpenVPN server starts, and that would be annoying.

# Generate the key:

./easyrsa gen-req server nopass

# Get the new CA to sign our key so clients will trust it:

./easyrsa sign-req server server

(Optional) Step 5 - Create static secret
It's recommended that you create a static secret key that both server and client have a copy of.
This helps ensure the encryption is as strong as possible.
However, it is an optional step.

openvpn --genkey --secret ta.key
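
To actually use this key, both sides reference it with OpenVPN's tls-auth directive; note the key-direction parameter differs between server (0) and client (1). A sketch (adjust the paths to wherever the key lives on each side):

# in the server config
tls-auth ta.key 0
# in the client config
tls-auth ta.key 1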


Step 6 - Create the client keys

Perform this step for each client you want to be able to use the VPN server.
It is strongly recommended to use a different key for EACH client.

# Make the request to generate the key.
# We could put a password on this key, but the user would need to type it each time
# (and they can't change it), so I select no password.

./easyrsa gen-req client1-lappy nopass


#Get the certificate authority to sign the request

./easyrsa sign-req client client1-lappy


Step 7 - Server - Copy the keys - and put them in the right folders
Our final step is to copy the keys into the correct locations.

On the server we execute the following commands:

# Copy the CA (certificate authority) certificate over to the openvpn folder

cp pki/ca.crt /etc/openvpn

# Copy over the server private key and certificate

cp pki/issued/server.crt /etc/openvpn/
cp pki/private/server.key /etc/openvpn/

# Copy over the DH key

cp pki/dh.pem /etc/openvpn/

# Copy over the additional secret key (if created in Step 5)

cp ta.key /etc/openvpn/
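
The server config then points at these files. A typical set of key-related directives (a sketch; the rest of your server config will differ) looks like:

ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh.pem
tls-auth /etc/openvpn/ta.key 0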


Step 8 - Client - Keys to copy over to Clients
Copy the following files from the server you created the keys on (the OpenVPN server or CA server), along with your OpenVPN client configuration file.
In my OpenVPN client config, I reference all the keys using keys/filename.

In this case you'd copy the files into the keys sub-directory (create it if it doesn't exist) and then reference them in the client config with the folder prefix: ca keys/ca.crt, for example.

Certificate from the CA: ca.crt
Client public certificate: client1-lappy.crt
Client private key: client1-lappy.key
(Optional) Shared secret, if you created one in Step 5: ta.key
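
Put together, the key-related lines of the client config would look something like this (a sketch, following the keys/ convention above):

ca keys/ca.crt
cert keys/client1-lappy.crt
key keys/client1-lappy.key
tls-auth keys/ta.key 1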


--------------------

That's it! Assuming you have a working OpenVPN server/client config, this will give you a fully working PKI environment for use with OpenVPN.





© 2015 CommsCentral