Fixing mass amounts of merge conflicts with Git

At work I use VCR to record HTTP interactions ("cassettes") to make cucumber test runs faster. One problem is that these cassettes expire and cause merge conflicts. I recently found a way to clean those up with grep and git. Let's look at the command below:

$ grep -lr '<<<<<<<' . | xargs git checkout --ours

Here we use grep to recursively search files for the '<<<<<<<' conflict marker, then pipe that list to git checkout --ours to keep our version of each conflicted file. We could use --theirs instead if we wanted to keep their version of the conflict.
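As a quick sanity check, here's a throwaway sketch that manufactures a conflict and resolves it with the same pipeline. The file and branch names are made up for the demo, and I exclude .git so grep only touches working-tree files:

```shell
# Build a scratch repo with a conflicting file.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
echo "original" > cassette.yml
git add cassette.yml && git commit -qm "base"
base=$(git rev-parse --abbrev-ref HEAD)   # default branch name varies by git version

git checkout -q -b theirs
echo "their change" > cassette.yml
git commit -qam "their side"

git checkout -q "$base"
echo "our change" > cassette.yml
git commit -qam "our side"

# The merge fails and leaves <<<<<<< markers in cassette.yml.
git merge theirs >/dev/null 2>&1 || true

# Keep our side of every conflicted file.
grep -lr --exclude-dir=.git '<<<<<<<' . | xargs git checkout --ours

cat cassette.yml   # prints "our change"
```

After the checkout you would still `git add` the files to mark the conflicts resolved before committing the merge.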

Find and replace in directory with grep and sed

Recently I needed to find and replace a method name in a project as part of a reorganization. Rather than just letting an editor do it for me, I figured I would spend some time on a small science project on the command line.

What I came up with is a combination of grep, to find all the files containing the method name, piped to sed to do the global substitution.

grep -lr -e 'bad_method_name' * | xargs -n1 sed -i '' 's/bad_method_name/new_method_name/g'

As we see in the code above, the command starts with grep looking for 'bad_method_name' in * and pipes the matching files to sed. The only issue I had with sed was the -i flag. At first I didn't specify anything for that flag and got errors on OS X about a malformed sed command. Passing a blank string to it fixed that (the BSD sed that ships with OS X requires an argument to -i; GNU sed does not).
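Here's a small sketch of the same rename against throwaway files (the file and method names are invented). Note the portability wrinkle: this uses the GNU sed form, while on OS X you'd need `sed -i ''`:

```shell
# Scratch directory with two files containing the old name and one without.
tmp=$(mktemp -d) && cd "$tmp"
printf 'def bad_method_name\nend\n' > a.rb
printf 'bad_method_name(1)\n'       > b.rb
printf 'unrelated\n'                > c.rb

# GNU sed; on OS X / BSD sed use: sed -i '' 's/bad_method_name/new_method_name/g'
grep -lr 'bad_method_name' . | xargs -n1 sed -i 's/bad_method_name/new_method_name/g'

grep -l 'new_method_name' a.rb b.rb   # both matching files were rewritten in place
```

Restricting the rewrite to the files grep found means untouched files keep their timestamps, which keeps tools that watch mtimes happy.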

Remote Pairing: SOCKS Proxy

I’m a big fan of remote pair programming. The problem I often run into is the need to share a web browser. In the past I have always jumped straight to a screen sharing tool like Join.me or TeamViewer. The problem is that these tools tend to use quite a bit of bandwidth.

The solution I have found recently is to use a SOCKS proxy through the hosting machine.

The proxy lets us tunnel our traffic through our pair's machine, allowing us to access things local to them, like a VM that may not be publicly accessible.

This solution also has the added benefit of the remote pair not needing to do anything but have an ssh server running.

To start a tunnel we’ll need to use the following command but with some substitutions:

$ ssh -p pairing_server_ssh_port -N -D 9999 ssh_user@pairing_server

Let’s explore this ssh connection string!

First is the -p flag, followed by the port your pair is running their SSH server on. This flag is optional if your pair is using the standard port 22.

Next we include the -N flag, which tells the ssh client not to execute a remote command. This keeps the process running in our shell, allowing us to close the tunnel with Control+C.

After that is the -D flag, which is the SOCKS proxy flag. It takes a port number as an argument. This port needs to be open on your machine; it is the port your browser will connect to in order to tunnel traffic to your pair's machine.

Then we’ll need to finish filling out the username and server information.

Once you have the command filled out hit enter to start the tunnel running.
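If you pair with the same host regularly, the same flags can live in ~/.ssh/config instead. The host alias below is a placeholder, and the other values are the same substitutions as in the command above:

```
Host pairing
  Hostname pairing_server
  Port pairing_server_ssh_port
  User ssh_user
  DynamicForward 9999
```

With that in place, $ ssh -N pairing starts the tunnel.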

After our tunnel is running we just need to configure our browser to use the SOCKS proxy.

The easiest browser to set up for a SOCKS proxy is Firefox, so we'll want that installed on our machine.

Next we’ll need to open Firefox preferences and navigate to the ‘Advanced’ section (1). Once there we’ll want to go to the ‘Network’ preferences (2) and choose the ‘Connection’ settings (3).

Now we'll choose the Manual proxy configuration and enter localhost and the port we passed to -D (9999) as the SOCKS Host.

After clicking OK and closing the preference windows, you'll be able to use Firefox to browse the web through your pair's internet connection. As always when remote pairing, etiquette is key: use this browser only for the activities you need it for, not for general internet searching, email, etc. All of this browser's traffic will be going over the tunnel and through your pair's machine.

Using this tool in conjunction with the tmux pairing setup from Joe Kutner's Remote Pairing: Collaborative Tools for Distributed Development will give you a very low bandwidth solution for web development remote pairing.

Tunneling iTunes through SSH

I was looking for a way to listen to my Christmas playlist running on my Mac Mini at home while at work. Through a little bit of searching I found the post 'Tunneling iTunes through SSH'.

I took it a bit further and added the following to my ~/.ssh/config

Host itunes-home
  Hostname 
  User 
  LocalForward 3690 localhost:3689

This allows me to start up my tunnel with a simple

$ ssh itunes-home

and then run the command from the 'Tunneling iTunes through SSH' post

dns-sd -P iTunesServer _daap._tcp local 3690 localhost.local 127.0.0.1 &

and then the iTunes server shows up in my shared libraries list.

[Screenshot from 2013-12-11 at 8:35 AM: the shared library appearing in iTunes]

Now I can listen to my Christmas music at work without having to copy it to my laptop!

An open letter to Frontier Communications

Dear Frontier,

I am writing this open letter as a way to communicate my recent experiences with Frontier Communications.

On the 9th of September, my wife received a call from a regional manager saying that our service could be upgraded. She told the manager that I was not home and would call back later to find out more.

I called on the 10th of September and talked to a CSR who made my dreams come true: I was able to upgrade from 3 megs down to 7 megs down. I was excited; after being a Frontier/Verizon customer for almost 5 years, I was finally going to get a speed increase!

I was told that I would see my new speed within 1-2 business days and would receive a new DSL modem in the mail around the same time. On the 12th of September I wasn't seeing any speed increase, so I called tech support. They told me that everything looked good and I was provisioned for 7 meg service, but that I should wait for the new modem to arrive and install it to see if that would help.

The new modem came on the 13th of September; I installed it and hooked up only one computer. I ran a speed test and again saw only a little over 3 megs down. I work in IT (consultant and software developer), so I understand that I will not get the full 7 megs down, but I expected a consistent 6. So on the 20th I cleared some time, called the tech support group again, and was assured that I was provisioned for 7 megs and that there must be an issue on Frontier's end. The rep told me he had filed a ticket and the issue should be resolved by the evening of the 23rd.

On the 23rd a field support tech came to my house and spoke with my wife: everything was good on their end, but we were not eligible for 7 meg service. He instructed her to have me call customer support back.

Shortly after the tech left, I called customer support and was told that there really wasn't anything they could do; my old plan was no longer valid (it was a grandfathered Verizon plan) and I was stuck. They did take the 2 year agreement off of my account, but said there was nothing else they could do.

So now I'm stuck paying $5 a month more for no more speed. I work from home from time to time and am also part of an on-call rotation. My work cell phone works over VOIP, and I regularly lose either the phone call or my VPN connection when I am troubleshooting an issue in the middle of the night.

I was given the option of adding a second phone line and DSL line for $23 plus taxes and 911 fees (which, based on my current bill, total about $16). The downside is that I would need to purchase a WAN load balancer or managed switch in order to allow my devices to talk to each other and use both internet connections. I was hoping that you could provide a WAN load balancing solution, but it sounds like I am on my own.

From here I do not know where to go. The only consolation I was given today was a one time $15 credit which will cover my increased bill cost for 3 months while I try to figure out what to do.

I had looked forward to the day when I could publicly thank Frontier for upgrading the service in my area, however after 5 years of the same speed I’m sure it will be much longer before I can do that.

Sincerely,
- One disappointed customer

Rails 3, SOAP and Testing, Oh My!

This past week at work I have had the "pleasure" of building out a SOAP endpoint for an internal system. This led me to a wonderful new gem, Wash Out (https://github.com/inossidabile/wash_out). With a new feature comes new tests, and with my first SOAP endpoint came the question of how to test SOAP endpoints.

Testing a Wash Out controller wasn't something that was blatantly obvious to me, and it took some experimenting and discussions with Boris Staal (@_inossidabile). Below is an example of how we settled on testing. This may not be the end-all perfect solution, but hopefully it will help you get started.

Let’s start with a sample controller, this will give us a base to refer to with our tests.

class API::GasPrices < ApplicationController
  include WashOut::SOAP
  soap_action "GetGasPrices", 
              :args   => {:postal_code => :integer}, 
              :return => :string
  def GetGasPrices
    render :soap => GasPrices.find_all_by_postal_code(params[:postal_code]).to_xml.to_s
  end
end

This controller is a fairly standard example: it has one method, GetGasPrices, which takes a postal_code as an argument and returns a string of gas prices.

One of the things we got caught up on was how to actually hit the Wash Out gem and execute the HTTP request. To do that we'll need to mount our app as a Rack app.

We’ll need to make sure that we are using a version of the HTTPI gem that can use a Rack adapter. Right now we need to point our HTTPI gem at the GitHub repo. For the actual testing of making SOAP calls we can use Savon.

gem 'httpi', :git => 'https://github.com/savonrb/httpi.git'
gem 'savon'

Next we'll need to create a spec file for our tests. For this example let's use a request spec; even though this is a controller, we actually want to make a SOAP request to make sure our methods are receiving information correctly.

spec/
  requests/
    API/
      gas_prices_controller_spec.rb

Let's set up our spec file now. We'll need to require Savon and the spec_helper, then create a describe block like the one below.

require 'spec_helper'
require 'savon'

describe API::GasPrices do
  HTTPI.adapter = :rack
  HTTPI::Adapter::Rack.mount 'application', MyGasPriceChecker::Application

  it 'can get gas prices for a postal code' do
    application_base = "http://application"
    client = Savon::Client.new({:wsdl => application_base + api_gas_prices_path })
    result = client.call(:get_gas_prices, :message => { :postal_code => 54703 })
    result.body[:get_gas_prices_response][:value].should_not be_nil
  end
end

Inside of our describe block we are using the HTTPI Rack adapter and configuring it to mount our MyGasPriceChecker application as 'application'. This gives us the ability to use the URL http://application. In our test we create a new Savon client; this client needs access to our WSDL so it can find the operations it has access to.

Once we have a client created, our code can actually call the GetGasPrices SOAP method. Our test then verifies that the value of the response is not nil; this is really just a starting point, and we can iterate from here to test actual return values.

Controller Testing

Recently I have gotten to work on a greenfield application, which has led to some discussions about the best way to test things. I personally have been taking time to write tests that allow me to take small steps, giving me a better sense of direction. These small steps let me write a test, make it pass, write the next test, make it pass, then refactor.

Continuously refactoring my code keeps it clean and maintainable. I’m not going for cleverness or golfing to the lowest number of lines of code. Instead I’m going for code that is flexible and allows me to continue to add new features with ease.

Below is an example of a controller test, written in two different styles. We'll walk through a small refactoring scenario and see how we can keep our test simple, testing the result rather than the implementation, by comparing the two styles.

Disclaimer: this example makes some assumptions, such as the use of FactoryGirl and RSpec.

Here is our starting controller; we are going to focus on the edit action. It is pretty straightforward: we find the instance of 'Foo' to edit and return it to the view.

class FooController < ApplicationController 
  def edit
   @foo = Foo.find(params[:id])
  end
end

One way we can test this is to build an object with FactoryGirl and then stub out the find method on Foo to return that object. This lets us test our edit action, ensuring that foo is always the same object.

describe "GET 'edit'" do
  it 'should receive the STUBBED Foo instance' do
    @foo = FactoryGirl.build(:foo)
    Foo.stub(:find).and_return(@foo)
    get :edit, :id => @foo.id
  
    expect(assigns(:foo)).to eql @foo
  end
end

Another way we could test this trivial example is to use FactoryGirl to create the object rather than just build it. This takes a little less code, but does require two hits to the database: one for saving the record and one for retrieving it.

The benefit of this is that we are verifying that if we send in the 'id' of '@foo', we get back an identical '@foo' from the database.

Our controller test is now just saying: we expect to get '@foo' back. We really don't care how you get it, but in order for this edit form to work we need '@foo' back.

describe "GET 'edit'" do
  it 'should receive the NON stubbed Foo instance' do
    @foo = FactoryGirl.create(:foo)
    get :edit, :id => @foo.id
  
    expect(assigns(:foo)).to eql @foo
  end
end

Now let's refactor our Foo controller to do something a little different. We have decided that '@foo' should really be retrieved by passing 'params[:id]' to a class method called 'by_bar'.

class FooController < ApplicationController 
  def edit
     @foo = Foo.by_bar(params[:id])
  end
end

Here is our new class method that replaces the normal Foo.find.

class Foo
  def self.by_bar(id)
    Bar.find_by_foo_id(id).foo
  end
end

Now our first version of the test fails with an error, because it never actually creates and saves the object and only stubs out the find method, making it brittle and tied to the implementation.

Let's refactor our tests. First we'll start with the test using a stub; we'll need to modify the stub to use the 'by_bar' method.

Next we'll look at the test not using a stub; this one does not need any work. Again, we are testing that the instance of 'Foo' we expect is returned. In this case it is, so we don't need to do anything.

describe "GET 'edit'" do
  it 'should receive the STUBBED Foo instance' do
    @foo = FactoryGirl.build(:foo)
    Foo.stub(:by_bar).and_return(@foo)
    get :edit, :id => @foo.id
  
    expect(assigns(:foo)).to eql @foo
  end
  
  it 'should receive the NON stubbed Foo instance' do
    @foo = FactoryGirl.create(:foo)
    get :edit, :id => @foo.id
  
    expect(assigns(:foo)).to eql @foo
  end
end

While looking at our refactoring we realize that we didn't really need the separate 'Foo.by_bar' method, and we can just move the 'Bar.find_by_foo_id' call into our controller like so.

class FooController < ApplicationController 
  def edit
     @foo = Bar.find_by_foo_id(params[:id]).foo
  end
end

Now our stubbed test breaks again; let's see what it will take to fix it. We'll have to change our stub once more. This time the stub needs to be done on the 'Bar' class and its 'find_by_foo_id' method. Again, our non-stubbed test continues to work because the action still returns the same instance of 'Foo'.

describe "GET 'edit'" do
  it 'should receive the STUBBED Foo instance' do
    @foo = FactoryGirl.build(:foo)
    Bar.stub(:find_by_foo_id).and_return(double(:foo => @foo))
    get :edit, :id => @foo.id
  
    expect(assigns(:foo)).to eql @foo
  end
  
  it 'should receive the NON stubbed Foo instance' do
    @foo = FactoryGirl.create(:foo)
    get :edit, :id => @foo.id
  
    expect(assigns(:foo)).to eql @foo
  end
end

As we have seen here, taking small steps and limiting our stubs and mocks will save us time when refactoring. In all of these examples, what broke was test setup code tied to the implementation, rather than the behavior under test actually failing.

Using Virtualbox for development VMs

If you're like me, sometimes you just find it easier to use a virtual machine for development work, especially when it is a complex system with many moving parts. Recently I started work on a project that I inherited from another developer. The project was partially set up on a remote development server. I wanted to understand how the pieces went together and be able to pass that knowledge along to other co-workers, so I decided to build a VM.

I have been a heavy VMware Fusion user for many years yet not everyone in my office has the luxury of a license for it.  I decided to give Virtualbox a go for this project.

At first I was really happy with it. I got my machine up and running in no time and was working away until I got to setting up the server and realized that, for the sake of sane hosts-file management, I wanted it to have a static IP. I decided to switch the network adapter over to NAT, which is where the pain began.

I enjoy just shelling into my machines and working that way; once I switched to NAT I could no longer just ssh to the IP address of the VM. This seemed odd to me: VMware creates a virtual network for you, which apparently Virtualbox does not. So I went diving, and here is what I found:

You’ll need to enable port forwarding to the vm’s NIC. To start open up the network configuration section of your vm.

The Virtualbox network interface menu

After opening the network tab, click on the “Port Forwarding” button under the “Advanced” section.

Then fill out the rows with the relevant information. I found that if you try to put in a Host IP, the setup doesn't work quite right. Here I'm forwarding my localhost port 2222 to port 22 on the guest VM, along with port 8080 to port 80.
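If you'd rather script this than click through the GUI, the same rules can, as far as I can tell, be added with VBoxManage while the VM is powered off. The VM name below is a placeholder; check VBoxManage modifyvm --help on your install:

```
$ VBoxManage modifyvm "dev-vm" --natpf1 "ssh,tcp,,2222,,22"
$ VBoxManage modifyvm "dev-vm" --natpf1 "http,tcp,,8080,,80"
```

Each rule is "name,protocol,host ip,host port,guest ip,guest port", with the IP fields left blank just like in the GUI.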

Now from my host machine I can ssh in with $ ssh aip@localhost -p 2222, which uses the port forwarding to reach the VM.

The same logic applies to viewing the site running on the VM: in my browser I just access http://localhost:8080.

Now, this is a bit of work, but it is the trade-off for a free tool for virtualizing your environments. It's working for me right now, but your mileage may vary.

Pairing on a Severity 1 issue

Recently at work I had to help respond to a Severity 1 issue. This is our worst case scenario: something major is broken in production and is costing the company money. In the presentation I gave at Twin Cities Code Camp about pair programming, I said that troubleshooting bugs and fixing production issues often aren't the easiest things to pair on. Reflecting on the past couple of days, I noticed that while trying to fix the issue at hand we were pairing the whole time. I don't think we could have accomplished the fix without the entire team using some of the techniques I had outlined in my presentation.

We used two of my local pairing techniques along with two remote pairing tools. Locally we used a mixture of traditional pairing and "Divide and Conquer" pairing.

Traditional

With traditional pairing, one person "drives" while the other "navigates". As I sat in the driver's seat with vim open, tailing a log on production, my manager sat next to me helping navigate through the code as we figured out what needed to be modified. I used to think that I liked to work on these high-stress issues alone, troubleshooting with my own process, but I now have a different opinion. Having someone there to limit the amount of thrashing was a major help.

Divide and Conquer

With a major issue there are lots of logs to check, experiments to try, and pieces of a system to update. This is where we brought in another developer so we could divide up the work and conquer the problem. While I updated configs, he updated our applet code. By the time I had the configs ready, he had the applet built and ready to be pushed out. We were in constant contact, sitting next to each other, but were able to work in parallel to finish the one task.

Remote Tools

Working with a team distributed around the world makes troubleshooting major issues difficult. I was able to keep everyone on the same page by using a combination of a screen sharing tool, http://join.me, and tmux. I used Join.me to share a remote desktop session from London. From the remote desktop I used putty to connect to a tmux session on my own machine; this way I could show code and logs to the entire team. Using tmux also allowed me to reboot the machine and pick back up very quickly.

Using pairing techniques and tools helped us diagnose the problem and solve it.

TCCC12 – Pair Programming recap

I recently had the opportunity to share a passion of mine at Twin Cities Code Camp 12. I gave a presentation, "Pair Programming Techniques", where I shared my experiences of pair programming, including things I found to be great and things that are not so great. This presentation was one of my favorites, as it had a great amount of audience participation. I will attempt to share this information in a series of blog posts.

Pair programming is the act of working on a focused task with someone. I've seen it be a productivity booster; we've all had times where we hit the proverbial wall while working on something. I always found that explaining my situation to someone else would help me get over the block and get back to work. Imagine having that person right there working on the task with you. Sure, I've been in situations where both of us in a pair got stuck, but that is a rarity.

While pairing I tend to notice that I am getting a real time review of the code I am writing.  I like being able to discuss code while it is written, giving me a chance to learn and teach all while getting things done.

Pair programming is not a silver bullet; it's not going to solve all your problems or make your team's velocity jump a hundred and twenty percent. It's not just for twenty-something hotshot developers working at the latest startup. And having two sets of eyes on the codebase doesn't mean no bugs will get through. Pair programming is like the old saying "measure twice, cut once": it doesn't mean you will cut every board perfectly, but it limits your possibility of a defect.

In today's workplace it is common to have people in the office along with remote employees. Just because someone works remotely doesn't mean they can't pair and have to work alone. There are advantages and disadvantages to both situations.

Pairing locally has many advantages, most of which come from sharing a physical environment. When you are working locally you can better read your pair's body language, and you can grab some paper and sketch out ideas.

Pairing locally can have some distinct disadvantages too. A big one is sickness: working in a very close environment makes it easy to pass things to each other.

Working remotely solves the sickness issue and lets both people pair without worry when one has a simple cold. I have found myself enjoying remote pairing in this situation. While you don't have some of the physical amenities of a normal office setting, video chat and screen sharing come pretty close. I have spent time working remotely at previous jobs and have found that human interaction is something I need. Pairing remotely gives you someone to talk with, share your ideas with, and get feedback from.

While pairing remotely is great during the height of flu season, and video chat and screen sharing let you see almost everything, there is still a great deal of communication that needs to take place for the pair to be effective. Bandwidth is a major downfall of remote pairing; I live in a location with a slower DSL connection and have to prioritize the tools I use when remote pairing.

Look for follow-up blog posts about different styles and tools for pairing locally and remotely. I'll cover some of the techniques I have learned and developed since I started pairing.