Technovangelist


{
  "name": "Matt Williams",
  "roles": [
    {
      "title": "Evangelist",
      "org": "Datadog"
    },
    {
      "title": "Organizer",
      "org": "Boston DevOps Days"
    }
  ]
}
January 15, 2018

Creating Docker Images with AWS CodeBuild

At AWS re:Invent 2017, I spoke about Lambda and Step Functions (no, I don't know why it got renamed at the last minute to "best practices of Lambda workloads" either). My demo used those AWS features and many others to build a live blog of my session. One of the key steps was building a Docker image that is used to build my static website with GatsbyJS. Today, I thought I would break down the process for building this Docker image builder.

AWS CodeBuild is a service for building whatever you want. It has "build" in the title, but it can really do anything you like. And it has a free plan that gives you 100 build minutes per month. So while Fargate (AWS's serverless container offering) has no free plan, CodeBuild could be used instead for those short-running processes that need to run in a container.
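To put the free plan in perspective, here is a rough cost sketch. The build counts and durations below are assumptions for illustration, not numbers quoted from AWS:

```shell
# Hypothetical usage: 100 builds/month at ~2 minutes each on the small
# compute type, against a 100-minute monthly free tier.
builds_per_month=100
minutes_per_build=2
free_minutes=100
total_minutes=$((builds_per_month * minutes_per_build))
billable_minutes=$((total_minutes - free_minutes))
echo "billable minutes: $billable_minutes"
```

At the small compute type's historical price of roughly half a cent per minute, those 100 billable minutes would cost around fifty cents, so short container jobs can run for effectively nothing.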

There is a web-based UI for creating a CodeBuild project, but if you use it, you are guaranteed to forget what you did two months later, forcing you to relearn from scratch when you need to set it up again. So I like to use Terraform to configure the project.

If you don’t have HashiCorp’s Terraform installed, get that working and then come back here. Now create a file with a .tf extension. I called mine gbuild.tf. This uses the AWS provider, so you can start by adding the following content:

provider "aws" {
    region     = "us-east-1"
    access_key = "myaccesskey"
    secret_key = "mysecretkey"
}

Since I use aws-vault from 99designs, I leave out the access and secret keys. If you are using more than a single AWS account, aws-vault makes it incredibly easy to swap accounts, and I don't have to edit files before accidentally sharing secret info in a GitHub repo.

Next I need to create an IAM role which CodeBuild will use to access my ECR repository and write to the logs. Below the AWS provider block, add this:

resource "aws_iam_role" "codebuild_role" {
    name = "codebuild-role"
    assume_role_policy = <<ENDPOLICY
{
    "Version": "2012-10-17", 
    "Statement": [
        {
            "Effect": "Allow", 
            "Principal": {
                "Service": "codebuild.amazonaws.com"
            }, 
            "Action": "sts:AssumeRole"
        }
    ]
}
ENDPOLICY
}

On its own, the role cannot do anything yet. You need to add some policy statements:

resource "aws_iam_policy" "codebuild_policy" {
    name = "codebuild-policy"
    path = "/service-role/"
    policy = <<ENDPOLICY
{
    "Version": "2012-10-17", 
    "Statement": [
        {
            "Effect": "Allow", 
            "Resource": [
                "*"
            ], 
            "Action": [
                "logs:CreateLogGroup", 
                "logs:CreateLogStream", 
                "logs:PutLogEvents", 
                "ecr:GetAuthorizationToken", 
                "ecr:InitiateLayerUpload", 
                "ecr:UploadLayerPart", 
                "ecr:CompleteLayerUpload", 
                "ecr:BatchCheckLayerAvailability", 
                "ecr:PutImage"
            ]
        }
    ]
}
ENDPOLICY
}

So now we have a role, and we have some policies, but the policies are not yet associated with the role. So adding the next block links everything up:

resource "aws_iam_policy_attachment" "codebuild_policy_attachment" {
    name = "codebuild-policy-attachment"
    policy_arn = "${aws_iam_policy.codebuild_policy.arn}"
    roles = ["${aws_iam_role.codebuild_role.id}"]
}

Notice the roles = ["${aws_iam_role.codebuild_role.id}"] line. This assigns the ID generated when we created the role above to the policy attachment. You will see a lot of references like this when you work with Terraform.

Go back a couple of code blocks. Usually when I start working with AWS, I begin with no access, and when I hit a problem, I try again with the specific access it was complaining about added to the role. Some folks would just add an *, giving the role rights to do anything. Generally that's a bad idea. A role should have just the access it needs to do its job and no more.

Now let's move on in the Terraform file to add the actual CodeBuild project:

resource "aws_codebuild_project" "buildtechnovangelistbuilder" {
    name = "buildtechnovangelistbuilder"
    description = "CodeBuild project to build technovangelist.com"
    build_timeout = "5"
    service_role = "${aws_iam_role.codebuild_role.arn}"

    artifacts {
        type = "NO_ARTIFACTS"
    }

    environment {
        compute_type = "BUILD_GENERAL1_SMALL"
        image = "jch254/dind-terraform-aws"
        type = "LINUX_CONTAINER"
        privileged_mode = true     # don't set this and you get errors on the install
    }

    source {
        type = "GITHUB"
        location = "https://github.com/technovangelist/technovangelist-build.git"
        buildspec = "builder-buildspec.yml"
    }
}

There is a lot here, so let's walk through it from the beginning. First, I am creating a project called buildtechnovangelistbuilder. It has a timeout of 5 minutes and will use the role created above to execute the project. Typically a CodeBuild project has some sort of output which can be processed further; these outputs are called artifacts. I configured the project to know nothing about any artifacts, so it will be my responsibility to deal with any output myself.

Next is the environment sub-block. There are three compute types I can use. The free plan only works if you choose the smallest option: BUILD_GENERAL1_SMALL. The image being used is jch254/dind-terraform-aws. I found this to be a nice image that already knows about AWS CodeBuild and Docker, so it's very easy to use for building new images on CodeBuild.

It's incredibly important that you set privileged_mode to true. If you don't, you WILL get errors, and they WILL NOT make any sense.

The source sub-block sets the GitHub repo to perform the build on. CodeBuild will start a Docker image and run whatever is defined in the specified buildspec file.

And that is everything in my Terraform file. So let's move on to the buildspec file. The file is made up of a series of phases, each with commands. You could have everything in a single phase, but splitting things out makes it a little easier to see where the actual problem is with a broken build.

So here is my buildspec file:

version: 0.2

phases:
  install:
    commands:
      - nohup /usr/local/bin/dockerd -G dockremap --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay&
      - timeout -t 15 sh -c "until docker info; do echo .; sleep 1; done"
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region us-east-1)
  build:
    commands:
      - echo Building the Docker image...
      - docker build -t mattw-technovangelist-builder .
      - docker tag mattw-technovangelist-builder:latest 133160321634.dkr.ecr.us-east-1.amazonaws.com/mattw-technovangelist-builder:latest
  post_build:
    commands:
      - echo Pushing the Docker image...
      - docker push 133160321634.dkr.ecr.us-east-1.amazonaws.com/mattw-technovangelist-builder:latest

The first phase is install. I didn't really do anything here; I just looked at the docs for the Docker image I am using (jch254/dind-terraform-aws) and did what they told me. For pre_build, build, and post_build, again I didn't write anything from scratch; I just followed the instructions in the ECR repository. In fact, before you can run this, you need to create the ECR repository. When you do, you will see the four commands you need to run to add a new image to the repository. So really, this entire file is a matter of copy and paste. No thought involved.
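The registry address in those tag and push commands isn't magic; every ECR image URI follows the same pattern of account ID, region, and repository name. A quick sketch (the account ID below is a made-up placeholder, not mine):

```shell
# ECR image URIs follow <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
account_id=123456789012   # placeholder, not a real account
region=us-east-1
repo=mattw-technovangelist-builder
image_uri="${account_id}.dkr.ecr.${region}.amazonaws.com/${repo}:latest"
echo "$image_uri"
```

Once you can construct that URI, the tag and push lines in the buildspec are just the standard docker tag and docker push with it as the target.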

The Dockerfile is where things get interesting. In the build section above, I am building a Docker image based on the Dockerfile in the current directory. I want this image to be as small as possible and to already have all the prerequisites installed. If you don't deal with the prerequisites, you need to install them on every build. And if your image isn't as small as possible, you have to wait a little longer to load the image. In this case, time really is money. Not a lot of money, but it could make the difference between paying nothing and paying a couple of bucks per month. My Dockerfile uses something called a multi-stage build. You build an image, installing everything you need. You then build a second image, copying just the results into it. Doing this gives you all the features you need without any of the extra cruft that came with the install process. So here is my Dockerfile:

FROM mhart/alpine-node:8

WORKDIR /app
COPY package.json ./
RUN npm install --global gatsby-cli
RUN yarn install --production
RUN apk -Uuv add python curl && \
    curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && \
    unzip awscli-bundle.zip && \
    ./awscli-bundle/install -b /usr/bin/aws && \
    rm awscli-bundle.zip && rm -r awscli-bundle


FROM mhart/alpine-node:base-8
WORKDIR /app
COPY --from=0 /app /backupmod
COPY --from=0 /usr/bin/ /usr/bin/
COPY --from=0 /usr/lib/ /usr/lib/
COPY --from=0 /root/.local /root/.local

I first copy the package.json file and then install everything in it along with gatsby-cli. I then add a few other things needed for the AWS CLI. Then I copy all the resulting executables into the new image. And that's it. Some of this won't quite make sense until you see the actual build, but I will explain that in another post.

If you want this to run every time you make a change to the GitHub repo, you will need to set that up in the UI. That's the one thing you cannot do from the Terraform file. But it seems a small price to pay for a lot of repeatable functionality.

January 08, 2018

Working with Workman

A few years ago I made a change in the way I work. It's a change that is generally positive with regards to my productivity, but I would actively advise you not to go that way. I changed my keyboard layout. Changing your keyboard layout is drastic; I had no idea how much of a brain shift would be required. I type faster than I ever did, but the bottleneck now is not my fingers but rather thinking about what I want to say and how I want to say it. That said, if you have made a switch, or are getting started with that switch, here is how I deal with it on my Mac with multiple keyboards.

My daily computer is a 2017 MacBook Pro with TouchBar. This is by far the best Mac I have ever used. I think it's finally better than my old 17" MBP. The keyboard is dreamily wonderful, the touch bar transforms the top row into something that is useful rather than wasted space. And the trackpad finally makes me not long for the old ThinkPad tracksticks. But this post isn't about my Mac. I want to show you the tools I use to make it easier to work with my chosen keyboard layout.

I use Workman, and more specifically Workman Dead. Workman was created by OJ Bucao to help him deal with his RSI and to overcome the weaknesses of Colemak which had the same goal against Dvorak. As long as I was making a switch, I figured I should choose something that was the most efficient possible. Workman Dead adds a 'dead key' that changes the layout when I hit the comma key. As soon as I press any key after comma, the layout goes back to the standard Workman. I can type most of the common punctuation and symbols without taking my fingers far from the home row. Kind of amazing.

There are lots of statistics put out by many folks on what is easier and harder to type with. But many of those assume that all fingers have the same characteristics and all humans have the same configuration of fingers. In fact, some folks claim that certain key combinations are hard on different layouts, and I find them easy; others are supposed to be easy and are hard for me. So the claims may or may not be true for you: take the stats as a starting point and try them out yourself.

Unfortunately, OJ Bucao seems to have distanced himself from the project, but http://workmanlayout.org/ is still going and supporting users who want to start working with the layout. You can download Workman for macOS, Windows, and Linux, and if you are only using a single keyboard, that's all you need to think about.

If you are only using keyboards with the same key layout, you are good too, because a US QWERTY internal keyboard looks the same as a US QWERTY external keyboard. But that's not what I have. The keyboard on the MBP is a standard US layout: use the Workman layout files, change it in Keyboard Preferences, and everything is all set. But the external keyboard I use at work is an ErgoDox EZ. I have had this for a few years, and it is amazing. The firmware allows you to remap the keyboard so that the keys are anywhere you want. And it supports many layers, so click a key and you could have a newly remapped keyboard, no software needed. Mine is remapped to Workman in firmware. So when it's plugged into my Mac or anyone else's Mac, I just leave the OS layout on US and I can type with the Workman layout.

That's pretty awesome. But as soon as I unplug and move to a different room, I have a US-layout keyboard to deal with, so I switch the OS to Workman. Then as soon as I plug back in, I have a layout that is neither Workman nor US, and everything is screwed up. So I need a few things to happen to make this automatic.

First I started with a pair of simple scripts and a utility by Wolfgang Lutz. Lutz, or Lutzifer on GitHub, created Keyboard Switcher. Initially I had an issue with it, but it was quickly resolved. So using these two super simple scripts I am good to go:

#!/bin/bash
# Switch to US
/usr/local/bin/keyboardSwitcher select "U.S."

And

#!/bin/bash
# Switch to Workman
/usr/local/bin/keyboardSwitcher select "Workman-Dead"

But that meant I needed to trigger the right one, so I tweaked it to this:

#!/usr/local/bin/fish
function keyswitch
  set -l currkey (keyboardSwitcher get)
  switch $currkey
    case 'Workman-Dead'
      /usr/local/bin/keyboardSwitcher select "U.S."
    case 'U.S.'
      /usr/local/bin/keyboardSwitcher select "Workman-Dead"
  end
end

keyswitch

You can see that I use Fish for my shell, another story for another time. The function uses Keyboard Switcher to get the current layout, then uses it again to set the other one. That can easily be bound to a keyboard shortcut with something like Keyboard Maestro, which is what I used to do. And then at some point I found Hammerspoon, which lets you automate the Mac with simple Lua scripts. Here is the simple, albeit convoluted, script I use to automate Keyboard Switcher with Hammerspoon.

function setUSLayout(  )
  hs.execute("keyboardSwitcher select \"U.S.\"", true)
end

function setWorkmanLayout(  )
  hs.execute("keyboardSwitcher select \"Workman-Dead\"", true)
end

-- helper: returns true if any entry in the list has entry[key] == value
local function keyValueExists(list, key, value)
  for _, entry in ipairs(list) do
    if entry[key] == value then
      return true
    end
  end
  return false
end

function setKeyboard()
  local devs = hs.usb.attachedDevices()
  if keyValueExists(devs, "productName", "ErgoDox EZ") then
    setUSLayout()
  else
    setWorkmanLayout()
  end
end

function usbDeviceCallback(data)
    if string.match(data["productName"], "ErgoDox EZ") then
      setKeyboard()
    end
end

usbWatcher = hs.usb.watcher.new(usbDeviceCallback):start()

So at the bottom I create a watcher that gets triggered when a USB device is changed. usbDeviceCallback checks whether the device that changed was the ErgoDox keyboard (the change could be that the keyboard was added or removed) and, if so, calls setKeyboard(). I think I used to do things with other USB devices as well. setKeyboard checks whether the ErgoDox is attached right now. If it is, it sets the US layout. If not, it goes to the Workman Dead layout.

So now, no matter which keyboard is attached, I have the right layout configured. It took a while to get it working the way it is now, but now things just work and it's beautiful. I hope that is useful to someone other than just me.

November 01, 2017

CSV Lookup with Typinator for Working with Repetitive Forms

Datadog just had its third customer summit this week in Austin, Texas. I gave two workshops there to help customers get up to speed on Datadog and on monitoring Kubernetes. To reduce the number of bottlenecks the students had to deal with, I needed to set up about 110 trial accounts. The web form we have for setting up a trial account is great for setting up one, or even a dozen, accounts, but setting up 110 is not supposed to be easy. There was probably a person I could work with to configure this, but I just wanted to get it done. You probably have a form that you have to fill out that is a bit tiresome too.

There are many tools I could have used, but I decided to go the text-expansion route. All the tools in this space replace a small bit of text you type with a much bigger block of text. A year ago I would have done this with the king of the genre, TextExpander. But then they went the subscription route, and I wondered if there was an alternative that was almost as good. Instead I found a competitor that does everything TextExpander does and more, for a single price: Typinator.

Typinator basics

It's sometimes not as pretty to look at, but I can do the typical text expansion, like printing today's date when I type ddate, or a standard email response when I type a small string. And I can also do expansion with forms. This means I can type something like :user and see a simple form asking me for an integer, then expand out to a username, email, and organization that use the same base strings with my integer added to make them unique. I can also type a pattern of text and it will automatically expand to a variation on the text. For instance, if I type the 3-digit, 2-digit, 4-digit pattern of a social security number, a RegEx pattern can trigger a text expansion.

Typinator functions

Things get magical when you add functions. Typinator allows functions written in a variety of languages to act on the text you type or collect in a form, so I can interact with the system to get really interesting expansions. Not only did I need to create a bunch of accounts, I also needed to record the information, including passwords, so I could give it to the students. So my solution was to write a little program to spit out a CSV file with usernames, email addresses, organizations, and unique passwords for 110 users (I actually needed 100, but I screwed up nearly ten, so it made sense to go a bit further). Then I use Typinator to read that CSV file based on the number I enter in a form.

That function in detail

Let's get a bit more specific. To fill in the form, I need a unique username. Then I tap the tab key to move to the next field, where I type an email address. I used my email address with the "+ trick" so I can filter any emails that come in for that user: many email platforms let you add a + followed by some text to create a unique email address that still goes to you. So if your email is joe@company.com, you can sign up to Slack with joe+slack@company.com and emails still come to you, but they are a bit easier to filter. Continue with organization, then password. Finally, press enter to go to the next step.
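Building one of those plus-addressed emails from a base address is just a bit of string surgery; here is a quick shell sketch (joe@company.com is of course a made-up address):

```shell
# Split joe@company.com around the @ and insert "+slack" after the local part.
base="joe@company.com"
local_part="${base%%@*}"   # everything before the @
domain="${base#*@}"        # everything after the @
tagged="${local_part}+slack@${domain}"
echo "$tagged"
```

The same pattern works with any tag, which is what makes plus-addressing so handy for per-signup filtering.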

I went into the Typinator UI and created a new expansion. My abbreviation text is dduser. Then the text it expands to is

{{usernumber=?User Number}}AustinSummitUser{{usernumber}}{tab}mattw+AustinSummit{{usernumber}}@datadoghq.com{tab}AustinSummitUsers{tab}{/Python
import sys
import csv

userreader = csv.reader(open('/Users/mattw/projects/other/ddaccountmaker/users.txt', 'r'), delimiter = ',')
for row in userreader:
    value = row[0]
    prevalue = 'SummitTraining-'
    testvalue = prevalue+str({{usernumber}})
    if value == testvalue:
        sys.stdout.write(row[3].strip())
}

A bit more detail

OK, so what's going on there? {{usernumber=?User Number}} tells Typinator to open a text entry form to collect some text to be saved in the usernumber variable. The field on the form has the label User Number. Then AustinSummitUser{{usernumber}} will type AustinSummitUser01 (assuming I entered 01 for User Number). {tab} means hit the tab key, which moves the cursor to the next field over.

The function is written in Python. It opens the file on the file system called users.txt and reads through it until it finds a line whose first field is SummitTraining- followed by the number I entered. When it finds the right line, it returns the fourth field (row[3]), which is the unique password. Sure, it's kinda simple, but this is super powerful.
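The same lookup is easy to try outside Typinator. Here is a self-contained shell sketch of what the function does, using an invented two-line users.txt (the real file, usernames, and passwords are obviously different):

```shell
# Build a tiny stand-in for users.txt: username,email,org,password per line.
tmpfile="$(mktemp)"
cat > "$tmpfile" <<'EOF'
SummitTraining-6,mattw+AustinSummit6@datadoghq.com,AustinSummitUsers,pw-six
SummitTraining-7,mattw+AustinSummit7@datadoghq.com,AustinSummitUsers,pw-seven
EOF

usernumber=7
# Match the row whose first field is SummitTraining-<number>, print field 4.
password="$(awk -F, -v user="SummitTraining-${usernumber}" \
  '$1 == user { print $4 }' "$tmpfile")"
echo "$password"
rm -f "$tmpfile"
```

Swap the printed field for field 2 and you have the email lookup that the login expansion below relies on.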

Let's reuse this to login

Since I have the CSV file, I can create another expansion to log in as any of my trial users. So when I type ddlogin, this expansion is processed:

{{usernumber=?User Number<{{#usernumber+1}}>}}{/Python
import sys
import csv


userreader = csv.reader(open('/Users/mattw/projects/other/ddaccountmaker/users.txt', 'r'), delimiter = ',')
for row in userreader:
    value = row[0]
    prevalue = 'SummitTraining-'
    testvalue = prevalue+str({{usernumber}})
    if value == testvalue:
        sys.stdout.write(row[1].strip())
}
{tab}
{/Python
import sys
import csv


userreader = csv.reader(open('/Users/mattw/projects/other/ddaccountmaker/users.txt', 'r'), delimiter = ',')
for row in userreader:
    value = row[0]
    prevalue = 'SummitTraining-'
    testvalue = prevalue+str({{usernumber}})
    if value == testvalue:
        sys.stdout.write(row[3].strip())
}
{return}

This collects the user number, then spits out the email and password. I look up both values because the pattern changed halfway through creating the users. I initially thought I was setting up for 40 users, then expanded to 100, though perhaps I should have planned for 200. And when I added those 60 other users, I changed the email pattern. I have no idea why, but the change made a lot of sense, as these things tend to do, at 2 AM.

Several times in the workshops, I had to log in as different users to see what was going on. Being able to type ddlogin followed by 64 made logging in to that user's account super quick and easy.

Have you used Typinator?

Are you using Typinator? Have you done similar things with it? I would love to hear more about what you are up to with it, so send me a tweet @technovangelist.