ThatConference 2017 – Day 1

It’s another year at That Conference! I’m excited to be back again this year, though I am bummed not to be speaking. Here is a rundown of my reflections on the first day.

Forget Fight or Flight…Fight Fear

This morning’s keynote was delivered by a good friend and mentor of mine, Brian Hogan. His talk was a call to action for us as an industry to work together to reduce fear. Brian and I have had many conversations on similar topics in the past, and every time I chat with him I come away energized and ready to go to battle. Having struggled with fear and doubt over the past couple of months in my job search, this talk really hit home. It even fed a little into the open space I facilitated. One of the main takeaways I got from Brian is that fear is real and it is crippling. It’s not something to take lightly and brush off. It’s something that can impact your physical and mental health.

To beat fear we all need to work together and recognize when someone may be struggling with one. Look at your co-worker during a meeting: are they sitting quietly when a topic comes up that you know they are passionate about? They may be afraid to speak up, fearing that if they make the wrong comment or pick the wrong solution they could lose their job. Run with that thought: what if they are the sole breadwinner for their family and provide the health care? Add to that a really bad situation, like a partner or child who is chronically ill. That fear of losing a job could make the company miss out on a great opportunity.

When we notice situations like this we should work to make the environment safe and collaborative.

Route-planning your monitoring stack climb

After the inspiring talk by Brian, I headed over to hear Jamie Riedesel talk about planning out monitoring initiatives. Jamie did a wonderful job breaking down all the thoughts and questions that should go into monitoring.

I took away 3 major points to use as some goals in my next monitoring and alerting project:

  • Make every alert actionable
  • Make every alert specific
  • Don’t allow alerts to be ignored; alerts that aren’t actionable and specific inevitably will be.

In the past, I have been guilty of creating “alerting spam”. I’ve created alerts with the intention of “cleaning them up later”; you can guess the number of times I’ve actually cleaned them up.

Having a good plan and making sure these 3 points are accounted for will go a long way toward creating successful alerts.

Intro to Docker

I dropped into this talk towards the end, and I’m glad that’s when I showed up. The main meat of the talk covered things I have already experienced; the portion of the talk description that caught my attention was deploying to Azure and AWS. I was really impressed with the Docker Hub and Azure integration.

Open Space: Developer Interview Process

After spending 2 months looking for my next adventure, I wanted to share what I had just been through and also talk about how to make interviews for developers much more consistent. A side benefit was gathering some information to share with my students who are heading out into the workforce.

The conversation was really interesting in that a few of the attendees were hiring managers. Their perspective came from working at large companies, where the resumes they saw had already passed through HR screeners, a major frustration for them. They often felt that this process cost them potential candidates and let the wrong candidates through.

We talked about how to fairly evaluate someone’s technical skill level and fit for a specific job, and we really struggled to come up with anything better than the broken pattern of a take-home exercise.

Overall the first day was fun and exciting. I didn’t even finish this article before crashing into bed.  More to come from day 2!

One week in

Well, after 2 months of interviewing and submitting applications, I landed a new gig! I joined the team at healthfinch as the Operations Manager. The opportunity gives me the chance to work with some talented folks on a platform aimed at increasing the quality of care and reducing the busywork doctors and nurses have to complete on a daily basis.


Looking for the next career challenge

I never thought I would be publicly on the lookout for a new gig, but here I am. I have been on a rollercoaster ride these last few months, and having exited the ride, I am in search of a new thrill.

As I have progressed in my career, I feel ever more confident in the set of skills I have acquired and in my ability to use them. Reflecting on my accomplishments over the past few years, I am inspired to continue adding to that list. Those accomplishments have all come through hard work and collaboration. I love collaborating in any capacity, from being in a room or on a video call with someone, to pair programming, to a discussion over pull requests.

I’ve also reflected on how much I have learned over the years, and this too inspires me to keep growing as a person, developer, and manager. I care about helping mold quality people and quality software, and I find the best way to do this is with constant feedback and education. I love being a part-time instructor at the local community college; there is a feeling of accomplishment I get vicariously through my students as they achieve success.

My time in the classroom impacts my daily work in so many ways. One of the biggest lessons I have learned is how to communicate your idea to someone in a way that is empathetic and compassionate. Much like the co-workers I have had or the employees I have managed, students come to me with different experiences, both in life and in education: some know object-oriented programming, others don’t; some have used an operating system other than Windows, some have not. The list goes on. This poses a challenge when giving assignments and even during lecture and lab times. Through practice and constant feedback from students and employees, I have developed a communication approach that consists primarily of storytelling and analogies.

Storytelling allows me to frame the content in a way that best fits the individual; to do that, I need a good understanding of the person I am communicating with. Building a relationship to a level that allows me to communicate effectively has the side effect of building trust with the person. This trust increases their comfort in speaking up when they don’t understand something, knowing I will find another way to communicate with them.

As I move forward in my career, I am looking for a place that will allow me to use all my skills and acquire new ones. I’m open to opportunities that push me out of my comfort zone and give me more perspectives from which to attack problems. I have really enjoyed my time with Ruby on Rails, and specifically the Ruby language. I see new things on the horizon like Elm and WebAssembly, along with tools like React Native that are gaining traction, and I’m excited to try something new.

If you are looking for someone like me, please contact me chris at. You’ll find my resume-related information on LinkedIn and my code-related information on GitHub. My course materials are out on my course GitHub organization, including class samples and lectures.

Twin Cities Code Camp 19 – Recap

Hey all,

This past weekend I made the 5-hour drive up to Minneapolis, MN for the 19th edition of Twin Cities Code Camp. I had the opportunity to speak there with a longtime friend. We demonstrated how to build an app using the Phoenix web framework and then add Angular on top of it. We did a screencast-style recording of the presentation, which is linked below.

I also went to a couple sessions while I was there, one on developing apps for the Apple TV and another on managing technical debt.

The Apple TV session was interesting, as it was the first time I had heard about TVMLKit from Apple. The TL;DR is that you can now write Apple TV apps using remotely hosted JavaScript.

The session on managing technical debt was interesting in that I heard a number of horror stories that made me really appreciate working at Getty, and specifically all the things we have done in the Madison office to keep paying down our technical debt.

Then I got to enjoy this tasty beverage at the speaker dinner:


Find and replace in directory with grep and sed

Recently I needed to find and replace a method name in a project as part of a reorganization. Rather than just letting an editor do it for me, I figured I would spend some time on a small science project at the command line.

What I came up with is a combination of grep, to find all the files containing the method name, and a pipe of that file list to sed to do the global substitution.

grep -lr -e 'bad_method_name' * | xargs -n1 sed -i '' 's/bad_method_name/new_method_name/g'

As you can see above, the command starts with grep looking for ‘bad_method_name’ in * and pipes the matching files to sed. With sed, the only issue I had was the -i flag. At first I didn’t specify anything for it and got errors on OS X about a malformed sed command; passing it a blank string fixed that.
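That -i difference is the classic GNU vs. BSD sed split, so the same one-liner isn’t portable as written. Below is a small sketch that picks the right invocation at runtime; the scratch directory, file name, and method names are just demo stand-ins, not from the original project:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo fixture: a scratch directory containing a file that uses the old name.
dir=$(mktemp -d)
printf 'result = bad_method_name(42)\n' > "$dir/example.rb"
cd "$dir"

# GNU sed accepts -i with no argument; BSD/macOS sed requires a backup
# suffix argument, so we pass an empty string there.
if sed --version >/dev/null 2>&1; then
  inplace=(sed -i)       # GNU sed (Linux)
else
  inplace=(sed -i '')    # BSD sed (OS X / macOS)
fi

# List matching files with grep -rl, then hand the list to sed.
grep -rl 'bad_method_name' . | xargs "${inplace[@]}" 's/bad_method_name/new_method_name/g'

cat example.rb    # result = new_method_name(42)
```

The `sed --version` probe works because GNU sed supports the flag and BSD sed errors out on it, which is enough to tell them apart.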

An open letter to Frontier Communications

Dear Frontier,

I am writing this open letter as a way to communicate my recent experiences with Frontier Communications.

On the 9th of September, my wife received a call from a regional manager saying that our service could be upgraded. She told the manager that I was not home and that she would call back later to find out more.

I called on the 10th of September and talked to a CSR who made my dreams come true: I was able to upgrade from 3 megs down to 7 megs down. I was excited; after being a Frontier/Verizon customer for almost 5 years, I was finally going to get a speed increase!

I was told that I would see my new speed within 1-2 business days and would receive a new DSL modem in the mail around the same time. On the 12th of September, I wasn’t seeing any speed increase, so I called tech support. They told me that everything looked good and I was provisioned for 7 meg service, but that I should wait for the new modem to arrive and install it to see if that would help.

The new modem came on the 13th of September; I installed it and hooked up only one computer. I ran a speed test and again saw only a little over 3 megs down. I work in IT (consultant and software developer), so I understand that I will not get the full 7 megs down, but I expected a consistent 6. So on the 20th I cleared some time, called the tech support group again, and was assured that I was provisioned for 7 megs and that there must be an issue on Frontier’s end. The rep told me that he had filed a ticket and the issue should be resolved by the evening of the 23rd.

On the 23rd a field support tech came to my house and spoke with my wife; everything was good on their end, but he told her that we were not eligible for 7 meg service. He instructed her to have me call back to customer support.

Shortly after the tech left, I called customer support and was told that there really wasn’t anything they could do; my old plan was no longer valid (it was a grandfathered Verizon plan) and I was stuck. They did take the 2-year agreement off of my account, but said there was nothing else they could do.

So now I’m stuck paying $5 a month more for no extra speed. I work from home from time to time and am part of an on-call rotation. My work cell phone works over VoIP, and I regularly lose either the phone call or my VPN connection when I am troubleshooting an issue in the middle of the night.

I was given the option of adding a second phone line and DSL line for $23 plus taxes and 911 fees, which based on my current bill total about $16. The downside is that I would need to purchase a WAN load balancer or managed switch to allow my devices to talk to each other and use both internet connections. I was hoping that you could provide a WAN load-balancing solution, but it sounds like I am on my own.

From here I do not know where to go. The only consolation I was given today was a one-time $15 credit, which will cover my increased bill for 3 months while I try to figure out what to do.

I had looked forward to the day when I could publicly thank Frontier for upgrading the service in my area, however after 5 years of the same speed I’m sure it will be much longer before I can do that.

– One disappointed customer

Rails 3, SOAP and Testing, Oh My!

This past week at work I had the “pleasure” of building out a SOAP endpoint for an internal system, which led me to a wonderful new gem: Wash Out. With a new feature comes new tests, and with my first SOAP endpoint came a new question: how do you test SOAP endpoints?

Testing a Wash Out controller wasn’t something that was blatantly obvious to me; it took some experimenting and discussions with Boris Staal (@_inossidabile). Below is an example of the approach we settled on. It may not be the end-all perfect solution, but hopefully it will help you get started.

Let’s start with a sample controller; this will give us a base to refer to in our tests.

class API::GasPrices < ApplicationController
  include WashOut::SOAP

  soap_action "GetGasPrices",
              :args   => { :postal_code => :integer },
              :return => :string

  def GetGasPrices
    # Respond with the matching gas prices serialized as an XML string
    render :soap => GasPrices.find_all_by_postal_code(params[:postal_code]).to_xml.to_s
  end
end

This controller is a fairly standard example: it has one method, GetGasPrices, which takes a postal_code as an argument and returns a string of gas prices.

One of the things that caught us up was how to actually hit the Wash Out gem and execute the HTTP request. To do that, we’ll need to mount our app as a Rack app.

We’ll need to make sure we are using a version of the HTTPI gem that supports a Rack adapter; right now that means pointing the HTTPI gem at its GitHub repo. For the actual SOAP calls in our tests we can use Savon.

gem 'httpi', :git => ''
gem 'savon'

Next we’ll need to create a spec file for our tests. For this example let’s use a request spec; even though this is a controller, we actually want to make a SOAP request to make sure our methods are receiving information correctly.

spec/
|+requests/
| |+API/
| | |-gas_prices_controller_spec.rb

Let’s set up our spec file now. We’ll need to require Savon and the spec_helper, then create a describe block like the one below.

require 'spec_helper'
require 'savon'

describe API::GasPrices do
  # Serve the Rails app in-process through HTTPI's Rack adapter
  HTTPI.adapter = :rack
  HTTPI::Adapter::Rack.mount 'application', MyGasPriceChecker::Application

  it 'can get gas prices for a postal code' do
    application_base = "http://application"
    client ={ :wsdl => application_base + api_gas_prices_path })
    result =, :message => { :postal_code => 54703 })
    result.body[:get_gas_prices_response][:value].should_not be_nil
  end
end

Inside of our describe block, we set HTTPI to use its Rack adapter and configure it to mount MyGasPriceChecker::Application as ‘application’. This gives us the ability to use the URL http://application. In our test we create a new Savon client; this client needs access to our WSDL so it can find the operations it has access to.

Once we have a client created, our code can actually call the GetGasPrices SOAP method. Our test then verifies that the value of the response is not nil. This is really just a starting point, and we can iterate going forward to test actual return values.
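Under the hood, Savon is just posting a SOAP envelope over HTTP. For a sense of what that request looks like on the wire, here is a sketch that builds an envelope for the GetGasPrices action; the namespace URI and the endpoint path in the comment are placeholder assumptions, since the real values come from the WSDL that Wash Out generates:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Write a sample SOAP 1.1 request envelope for GetGasPrices.
# "" is a stand-in namespace, not the real one.
req=$(mktemp -d)/request.xml
cat > "$req" <<'XML'
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv=""
                  xmlns:tns="">
  <soapenv:Body>
    <tns:GetGasPrices>
      <postal_code>54703</postal_code>
    </tns:GetGasPrices>
  </soapenv:Body>
</soapenv:Envelope>
XML

# Against a running server, something like this would post the envelope
# (the path is hypothetical; check your routes and generated WSDL):
#   curl -s -H 'Content-Type: text/xml' -H 'SOAPAction: "GetGasPrices"' \
#        --data @"$req" http://localhost:3000/api/gas_prices/action

cat "$req"
```

Seeing the raw envelope can be handy when a Savon call fails and you need to compare what was sent against what the WSDL expects.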

Using VirtualBox for development VMs

If you’re like me, sometimes you just find it easier to use a virtual machine for development work, especially when it is a complex system with many moving parts. Recently I started work on a project that I had inherited from another developer. The project was partially set up on a remote development server. I wanted to understand how the pieces fit together and be able to pass that knowledge along to other co-workers, so I decided to build a VM.

I have been a heavy VMware Fusion user for many years, yet not everyone in my office has the luxury of a license for it, so I decided to give VirtualBox a go for this project.

At first I was really happy with it. I got my machine up and running in no time and was working away until I came to setting up the server and realized that, for the sake of sane hosts-file management, I wanted it to have a static IP. I decided to switch the network adapter over to NAT, which is where the pain began.

I enjoy just shelling into my machines and working that way; once I switched to NAT, I could no longer just SSH to the IP address of the VM. This seemed odd to me: VMware creates a virtual network for you, which apparently VirtualBox does not. So I went diving, and here is what I found:

You’ll need to enable port forwarding to the VM’s NIC. To start, open up the network configuration section of your VM.

The VirtualBox network interface menu

After opening the network tab, click on the “Port Forwarding” button under the “Advanced” section.

Then fill out the sections with the relevant information. I found that if you try to put in a Host IP, the setup doesn’t work quite right. Here I’m forwarding localhost port 2222 to port 22 on the guest VM, along with port 8080 to port 80.

Now from my host machine I can use the following shell command to SSH in: $ ssh aip@localhost -p 2222, which uses the port forwarding and lets me access my machine.

The same logic applies to viewing the site running on the VM; in my browser I just have to access http://localhost:8080.
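The same forwarding rules can also be created from the command line with VBoxManage, which is handy if you want to script VM setup for co-workers. A sketch, assuming a VM named "devbox" (the name is a stand-in); it is written as a dry run that prints the commands, since actually applying them requires VirtualBox and a real VM:

```shell
#!/usr/bin/env bash
set -euo pipefail

VM="devbox"   # hypothetical VM name; substitute your own

# --natpf1 adds a port-forwarding rule to NAT adapter 1, in the form
#   name,protocol,host-ip,host-port,guest-ip,guest-port
# (empty host/guest IP fields mean "any", matching the GUI advice above)
cmds=(
  "VBoxManage modifyvm $VM --natpf1 ssh,tcp,,2222,,22"
  "VBoxManage modifyvm $VM --natpf1 http,tcp,,8080,,80"
)

# Dry run: print each command. To actually apply the rules, run them
# directly (with the VM powered off).
for c in "${cmds[@]}"; do
  echo "$c"
done
```

This mirrors the two GUI rules exactly: host 2222 to guest 22 for SSH, and host 8080 to guest 80 for the web server.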

Now, this is a bit of work, but it is the trade-off for a free tool for virtualizing your environments. It’s working for me right now, but your mileage may vary.

Pairing on a Severity 1 issue

Recently at work I had to help respond to a Severity 1 issue. This is our worst-case scenario: something major is broken in production and is costing the company money. In the presentation I gave at Twin Cities Code Camp about pair programming, I said that troubleshooting bugs and fixing production issues often aren’t the easiest things to pair on. Reflecting on the past couple of days, I noticed that while trying to fix the issue at hand we were pairing the whole time. I don’t think we could have accomplished the fix without the entire team using some of the techniques I had outlined in my presentation.

We used two of my local pairing techniques along with two remote pairing tools. Locally, we used a mixture of traditional pairing and “Divide and Conquer” pairing.


Traditional Pairing

With traditional pairing, one person is “driving” while the other person “navigates”. As I sat in the driver’s seat with vim open, tailing a log on production, my manager sat next to me helping navigate through the code as we figured out what needed to be modified. I used to think that I liked to work on these high-stress issues alone and troubleshoot things with my own process, but I now have a different opinion. Having someone there to limit the amount of thrashing was a major help.

Divide and Conquer

With a major issue there are lots of logs to check, experiments to try, and pieces of the system to update. This is where we brought in another developer so we could divide up the work and conquer the problem. While I updated configs, he updated our applet code. By the time I was done getting the configs ready, he had the applet built and ready to push out. We were in constant contact sitting next to each other, but we were able to work in parallel to finish the one task.

Remote Tools

Working with a team distributed around the world makes troubleshooting major issues difficult. I was able to keep everyone on the same page by using a combination of a screen-sharing tool and tmux. I used to share a remote desktop session from London. From the remote desktop I was able to use PuTTY to connect to a tmux session on my own machine; this way I could show code and logs to the entire team. Using tmux also allowed me to reboot the machine and get back up and running very quickly.
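The tmux half of that setup is easy to reproduce: the trick is a named, detached session that survives your disconnects and that anyone who can reach the machine can attach to. A minimal sketch (the session name "sev1" is arbitrary):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Start a detached, named session. Anyone who can shell into this
# machine (e.g. via PuTTY from a shared remote desktop) can join the
# very same session with:  tmux attach -t sev1
tmux new-session -d -s sev1

# Confirm the session exists, then clean up the demo session.
sessions=$(tmux list-sessions -F '#{session_name}')
echo "$sessions"
tmux kill-session -t sev1
```

Because the session lives on the server side of the SSH connection, dropping your terminal (or rebooting the machine you are connecting from) loses nothing; you just attach again.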

Using pairing techniques and tools helped us diagnose the problem and solve it.