Rails on Maui

Programming in Paradise

React on Rails Tutorial

In response to a recent client request for a richer browser side UI experience, I took a fresh look at all the recent advances in JavaScript rich client frameworks. The React library stood out as unique, innovative, and impressive.

The main reasons that I like React are:

  1. It’s a better abstraction than MVC!
  2. React keeps track of what needs to change in the DOM with its virtual DOM model.
  3. All the view rendering code can assume that nothing changes during the rendering process as components recursively call render(). This makes reasoning about the rendering code much simpler.
  4. The simpler conceptual model of always rendering the entire UI from a given state is akin to the server-side rendering of HTML pages that Rails programmers are more familiar with.
  5. The documentation is very good, and it’s got significant traction.

Given that React is just about the View part of the client UI, or more specifically, view components, it seems especially suitable for integration into the Rails ecosystem to help build better rich JavaScript UIs. The React website contains a simple tutorial utilizing Node for the backend. But what if you want to use Rails for the backend?

The following instructions walk you through the steps to build the original simple tutorial with a Rails 4.2 backend utilizing the react-rails gem. With the Rails scaffold generator, very little Rails coding is required. You can try the end result of the completed tutorial on Heroku, and the code is on GitHub.

Since the original React tutorial is excellent, I will not be rehashing any of its explanations of how React works. This tutorial focuses purely on converting that tutorial to utilize Rails.

Besides carefully studying the original tutorial, I recommend:

  1. Watch these two videos for an introduction to React’s virtual DOM model: (a) one explaining the design philosophy of React and why MVC is not the right model for building UIs, and (b) one comparing React.js vs. key-value observation (Ember.js) and dirty checking (AngularJS).
  2. Play with the examples on the React overview page. Don’t just read the examples. You can play with the code right on that page!
  3. Read the docs, which I found fairly interesting.

Useful React Links

  1. Completed React-Rails tutorial: Tutorial Live on Heroku.
  2. Rails 4.2, React, completed tutorial: Github repo for completed tutorial.
  3. React: A JavaScript Library For Building User Interfaces: Main website for React.
  4. React Tutorial: The Node basis for this tutorial.
  5. reactjs/react-tutorial: Github repo for official Node based tutorial.

Tutorial Step by Step

Create a brand new Rails 4.2 Application

  1. Install Ruby 2.1.2 or whichever recent Ruby you prefer. I use rvm.
  2. Install Rails gem
    gem install rails --pre
    

    NOTE: There is a bug if your RubyGems version is newer than 2.2.2. This is detailed in this question on Stack Overflow.

  3. Create the Rails app
    rails new react-rails-tutorial
    
  4. cd react-rails-tutorial
  5. Create .ruby-version and .ruby-gemset per your preferences inside react-rails-tutorial directory.
  6. Run bundler
    bundle install
    
  7. Create new git repository
    git init .
    
  8. Add and commit all files:
    git add . && git commit -m "rails new react-rails-tutorial"
    

Create Base Rails App Scaffolding for Comment model

  1. Run generator. Be sure to use the exact names below to match the React tutorial.
    rails generate scaffold Comment author:string text:text
    
  2. Migrate the database
    rake db:migrate
    
  3. Commit
    git add . && git commit -m "Ran rails generate scaffold Comment author:string text:text and rake db:migrate"
    

Create Page for App

  1. Run the controller generator
    rails generate controller Pages index
    
  2. Fix your config/routes.rb to go to the home page, by changing
    get 'pages/index'
    

    to

    root 'pages#index'
    

Try Out the New Rails App

  1. Start the server
    rails server
    
  2. Open your browser to http://0.0.0.0:3000 and see your blank home page.
  3. Open your browser to http://0.0.0.0:3000/comments and see the comments display.
  4. Add a comment. Click around. Neat!

  5. Test out the JSON API, automatically created by Rails:
    curl 0.0.0.0:3000/comments.json
    

    and see

    [{"id":1,"author":"Justin","text":"My first comment.","url":"http://0.0.0.0:3000/comments/1.json"}]
    
  6. View your routes
    > rake routes
             Prefix Verb   URI Pattern                  Controller#Action
               root GET    /                            pages#index
           comments GET    /comments(.:format)          comments#index
                    POST   /comments(.:format)          comments#create
        new_comment GET    /comments/new(.:format)      comments#new
       edit_comment GET    /comments/:id/edit(.:format) comments#edit
            comment GET    /comments/:id(.:format)      comments#show
                    PATCH  /comments/:id(.:format)      comments#update
                    PUT    /comments/:id(.:format)      comments#update
                    DELETE /comments/:id(.:format)      comments#destroy
    
  7. If all that worked, then commit your changes
    git add . && git commit -m "Added Pages controller and set root route"
    

React Tutorial Using Node

This is what we’ll be converting to Rails 4.2.

  1. Create a new branch, in case we want to test the same design with AngularJS or EmberJS:
    git checkout -b "react"
    
  2. Take a look at the React Tutorial and the github repo: reactjs/react-tutorial.
  3. Open up a new shell window. Pick a directory and then do
    git clone git@github.com:reactjs/react-tutorial.git
    
  4. cd to the react-tutorial directory and open up the source code.
  5. Optionally, run the tutorial example per the instructions in the README.md.

Adding React to Rails

  1. We’ll be using the reactjs/react-rails gem. Plus we’ll need to include the showdown markdown parser, using the showdown-rails gem. Add these lines to your Gemfile and run bundle
    gem 'react-rails', github: 'reactjs/react-rails', branch: 'master'
    gem 'showdown-rails'
    

    Note, I’m using the tip of react-rails. Depending on when you try this tutorial, you may not wish to use the tip, and definitely don’t do that for a production application!

  2. Per the gem instructions, let’s add the js assets below the turbolinks reference in app/assets/javascripts/application.js
    //= require jquery
    //= require jquery_ujs
    //= require turbolinks
    //= require showdown
    //= require react
    //= require_tree .
    
  3. Once you verify that you can load 0.0.0.0:3000 in your browser, then commit the files to git:
    git commit -am "Added react-rails and showdown-rails gems"
    

Move Tutorial Parts to Rails Application

Now the fun starts. Let’s take the parts out of the node tutorial and put them into the Rails app.

  1. Copy the necessary line from react-tutorial/index.html to replace the contents of app/views/pages/index.html.erb. You’ll just have one line there:
    <div id="content"></div>
    
  2. Now, the meat of the tutorial, the JS code. Copy the entire contents of react-tutorial/scripts/example.js into app/assets/javascripts/comments.js.jsx (Renamed from comments.js.coffee).
  3. Commit the added files, so we can see what we change from the original versions.
    git commit -am "index.html.erb and comments.js.jsx added"
    
  4. Start the rails server (rails s). Visit 0.0.0.0:3000. Nothing shows up!

Tweak the Tutorial

In the example, the call to load example.js comes after the declaration of the DOM element with id “content”. So let’s run the renderComponent after the DOM loads. Wrap the React.renderComponent call at the bottom of comments.js.jsx like this:

$(function() {
  React.renderComponent(
    <CommentBox url="comments.json" pollInterval={2000} />,
    document.getElementById('content')
  );
})

Let’s commit that diff: git commit -am “React component loads”

Then copy the css from react-tutorial/css/base.css over to app/assets/stylesheets/comments.css.scss.

The styling is still not quite right.

Add bootstrap-sass Gem

  1. Add the gems
    gem 'bootstrap-sass'
    gem 'autoprefixer-rails'
    
  2. Run bundle install
  3. Rename app/assets/stylesheets/application.css to application.css.scss and change it to the following:
    @import "bootstrap-sprockets";
    @import "bootstrap";
    
  4. Optionally, add this line to app/assets/javascripts/application.js
    //= require bootstrap-sprockets
    
  5. Restart the application. Notice that there is no padding to the left edge of the browser window. That’s an easy fix. Let’s put the content div inside a container, by changing app/views/pages/index.html.erb to this:
    <div class="container">
      <div id="content"></div>
    </div>
    
  6. Let’s spruce up the data entry part. Take a look at the Bootstrap docs for CSS Forms. You’ll have to refer to the diffs on GitHub for this change. Or you can take creative license here!

Adding Records Fails

The first issue is that we’re not submitting the JSON correctly to add new records.

Started POST "/comments.json" for 127.0.0.1 at 2014-08-22 21:48:55 -1000
Processing by CommentsController#create as JSON
  Parameters: {"author"=>"JG", "text"=>"Another **comment**"}
Completed 400 Bad Request in 1ms

ActionController::ParameterMissing (param is missing or the value is empty: comment):
  app/controllers/comments_controller.rb:72:in `comment_params'
  app/controllers/comments_controller.rb:27:in `create'

If you look at this method in comments_controller.rb, you can see the issue:

def comment_params
  params.require(:comment).permit(:author, :text)
end
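To see why that payload gets rejected, here is a simplified plain-Ruby model of what require and permit do. This is a sketch, not Rails’ actual implementation; the method and error message merely mimic the controller behavior above.

```ruby
# Simplified model of strong parameters (NOT Rails' real implementation):
# require(:comment) demands a top-level "comment" key and returns its value;
# permit then whitelists specific attributes within it.
def comment_params(params)
  comment = params.fetch("comment") do
    raise ArgumentError, "param is missing or the value is empty: comment"
  end
  comment.select { |key, _| %w[author text].include?(key) }
end

# The unwrapped payload from the tutorial's AJAX call is rejected:
begin
  comment_params("author" => "JG", "text" => "Another **comment**")
rescue ArgumentError => e
  puts e.message
end

# Wrapped in "comment", it passes and unknown keys are filtered out:
p comment_params("comment" => { "author" => "JG", "text" => "Another **comment**" })
```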

The fix is to wrap the params in “comment”, by changing this line in comments.js.jsx, in the function handleCommentSubmit,

data: comment,

to

data: { comment: comment },

Here’s an enlarged view of that diff from RubyMine.

After that change, we can observe this in the console when adding a new record:

Started POST "/comments.json" for 127.0.0.1 at 2014-08-22 21:55:18 -1000
Processing by CommentsController#create as JSON
  Parameters: {"comment"=>{"author"=>"JG", "text"=>"Another **comment**"}}
   (0.1ms)  begin transaction
  SQL (0.7ms)  INSERT INTO "comments" ("author", "created_at", "text", "updated_at") VALUES (?, ?, ?, ?)  [["author", "JG"], ["created_at", "2014-08-23 07:55:18.234473"], ["text", "Another **comment**"], ["updated_at", "2014-08-23 07:55:18.234473"]]
   (3.0ms)  commit transaction
  Rendered comments/show.json.jbuilder (0.7ms)
Completed 201 Created in 22ms (Views: 5.0ms | ActiveRecord: 3.9ms)

When Visiting Other Pages in the App

If you go to the url 0.0.0.0:3000/comments and look at the browser console, you’ll see an error due to the page-load script looking for an element with id “content” that doesn’t exist. Let’s fix that by checking that the DIV with id “content” exists before calling React.renderComponent.

$(function() {
  var $content = $("#content");
  if ($content.length > 0) {
    React.renderComponent(
      <CommentBox url="comments.json" pollInterval={2000} />,
      document.getElementById('content')
    );
  }
})

Deploying to Heroku

It’s necessary to make a couple of changes to the Gemfile: use pg in production and add the rails_12factor gem.

gem 'sqlite3', group: :development
gem 'pg', group: :production

gem 'rails_12factor'

Turbolinks

If you’re going to have other pages in the application, it’s necessary to change when React.renderComponent is called, switching from the document “ready” event to the document “page:change” event. You can find more details at the Turbolinks gem repo.

$(document).on("page:change", function() {
  var $content = $("#content");
  if ($content.length > 0) {
    React.renderComponent(
      <CommentBox url="comments.json" pollInterval={2000} />,
      document.getElementById('content')
    );
  }
})

Golden Gate Ruby Conference (GoGaRuCo) Pictures 2014

I took lots of great pictures at the Golden Gate Ruby Conference this year.

Overall, the conference was awesome. All the speakers seemed incredibly well prepared.

In case you haven’t heard, this was the last GoGaRuCo conference. Why? I heard that the costs for the facility are going up, especially the costs for catering. I also suspect that other new conferences, such as Ember Conf, are competing for attention. And certainly it’s been a huge undertaking for the conference organizers.

I’ve been toying around with creating a Rails on Maui Conference, and I’ve just created a forum for just this sort of discussion.

Should we have a Maui Rails Conference? Let’s discuss the possibility of such a conference here. I’d need at least several committed co-organizers in order for this to become a reality. A possible date would be next September, 2015, given that GoGaRuCo will no longer take place.

I’d propose having a smaller, less formal conference for the first year. I’ve got a very reasonably priced venue in mind that could take up to 100 participants.

Ideas? Want to help?

I’ve broken the pictures up into smaller sets of the best pictures which I’ve placed in Facebook albums. Then I’ve got the complete sets of images posted to Flickr.

If you need any full resolution, non-watermarked images, please get in touch with me.

Storing or Excluding Node Modules in Rails Git Repositories

It was, and per the following links may still be, fashionable in the Node community to check dependencies into one’s git repository. However, Rubyists use Bundler, and I’ve never heard of checking gem dependencies into a Ruby project. So what do we do when we have Node dependencies in a Rails project?

Reasons to include node_modules in git

  1. Stack Overflow on why you should check node_modules into git and not have node_modules in your .gitignore.
  2. Mikeal Rogers’ post on this. Note, this post was from 2011. He says:

    Why can’t I just use version locking to ensure that all deployments get the same dependencies?

    Version locking can only lock the version of a top level dependency. You lock your version of express to a particular version and you deploy to a new machine 3 weeks later it’s going to resolve express’s dependencies again and it might get a new version of Connect that introduces subtle differences that break your app in super annoying and hard to debug ways because it only ever happens when requests hit that machine. This is a nightmare, don’t do it.

    and concludes with:

    All you people who added node_modules to your gitignore, remove that shit, today, it’s an artifact of an era we’re all too happy to leave behind. The era of global modules is dead.

    And this was all true, but it was written before npm-shrinkwrap was released (see below)!

  3. The Node FAQ clearly states:
    1. Check node_modules into git for things you deploy, such as websites and apps.
    2. Use npm to manage dependencies in your dev environment, but not in your deployment scripts.

Reasons not to include node_modules in git

Including node_modules in your git repo greatly increases the potential file churn for files that your team did not create, making pull requests on GitHub problematic due to the large numbers of changed files.

One problem with npm install is that while your package.json file may be locking down your dependency versions, it does not lock down your dependencies’ dependencies!

Instead, one can use npm-shrinkwrap to lock down all the dependencies, per this answer for Should “node-modules” folder be included in the git repository. It’s worth noting that supposedly Heroku will use npm-shrinkwrap.json, per this answer on Stack Overflow. Probably the best documentation for this is in the npm-install man page.

Conclusion

Consequently, I’m going with the approach of not including node_modules in my git repository by:

  1. Using npm-shrinkwrap.
  2. Placing node_modules in my project specific .gitignore.

I’ll do this for my projects until I’m convinced otherwise!

Updating My Blog to Octopress With Jekyll 2 and Discourse for Comments

This weekend I made the ambitious move to using Discourse.org for my blog comments and also upgraded Octopress to the latest version, which supports Jekyll 2.0. Here are my notes, so that you can evaluate whether you want to do either of these, as well as how to do so efficiently.

Motivation

What motivated me to update Octopress? The main reason was that Octopress finally got upgraded from a beta version of Jekyll to Jekyll 2.x.

What motivated me to migrate comments to Discourse?

  1. I already wanted to create a forum for my website, so integrating blog comments seemed worth pursuing. This is what BoingBoing uses for its blog articles. Click on the “Discuss” link below any BoingBoing article and get taken to the Discourse topic for that article.
  2. I wanted to be able to have more engaging conversations with my programmer friends on the topics which I’m blogging about.

What’s super cool about doing the conversion?

  1. Discourse will automatically create topics for each of your blog posts. You can see that here: http://forum.railsonmaui.com/category/blog
  2. Discourse can import the Disqus comments from your blog!

    Here’s what this looks like on the blog, http://www.railsonmaui.com, and on the forum, http://forum.railsonmaui.com.

Updating Octopress

Googling for upgrading octopress gave me my own article as the second match. That’s one more great reason to blog: your notes get indexed by Google!

I ran into one difficult issue with the upgrade. The issue was the very frustrating:

bin/ruby_executable_hooks:15: stack level too deep (SystemStackError)

How did I solve the problem?

Naturally the first thing to do is to google the error message. That was not particularly helpful.

Since I assumed that this problem would be pretty specific to my Octopress site, I guessed that the issue was related to a rogue Jekyll plugin.

I moved all my plugins that were not part of standard Octopress into a /plugins_removed directory, and then added back my plugins one at a time. That helped me narrow down the issue to the jekyll_alias_generator plugin, which sets up redirects when you change the URL of a published blog article.

Then I clicked on the Issues icon for the jekyll_alias_generator and searched for stack level too deep and BINGO!

And here’s the solution: Stack level too deep error #14, which is to replace lines 73-75 in alias_generator.rb with this code:

(alias_index_path.split('/').size).times do |sections|
    @site.static_files << Jekyll::AliasFile.new(@site, @site.dest, alias_index_path.split('/')[1, sections + 1].join('/'), '')
end
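To see what that loop actually registers, here’s a standalone sketch with Jekyll::AliasFile swapped for a plain array; alias_index_path is a made-up example value:

```ruby
# For an alias like "/old/path/index.html", the loop registers one entry per
# path depth, so intermediate directories get alias files too.
alias_index_path = "/old/path/index.html"
registered = []
(alias_index_path.split('/').size).times do |sections|
  # Array#[](start, length) clamps at the end of the array, so deeper
  # iterations just repeat the full path.
  registered << alias_index_path.split('/')[1, sections + 1].join('/')
end
p registered
```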

Another issue I hit was that I had a few template files that were using

layout: nil

This results in errors like:

Build Warning: Layout 'nil' requested in atom.xml does not exist.

This got changed in the recent version of Jekyll to use null, like this:

layout: null

So grep your files for layout: nil and change those to layout: null.
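If you have many templates, a short Ruby sweep can make the change for you. The glob pattern here is an assumption about a standard Octopress source layout; adjust it for your site.

```ruby
# Rewrite `layout: nil` front matter to `layout: null` across matching files.
# The default glob assumes an Octopress-style "source" directory.
def fix_nil_layouts(glob = "source/**/*.{xml,html,markdown}")
  Dir.glob(glob).each do |path|
    text = File.read(path)
    updated = text.gsub(/^layout: nil$/, "layout: null")
    # Only rewrite files that actually changed.
    File.write(path, updated) unless updated == text
  end
end

fix_nil_layouts
```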

Installing Discourse for Blog Comments

This is well described in the following articles. I’ll give you my specific steps below.

  1. Setting up discourse on Docker: github: discourse/docker and discourse/docs/INSTALL-digital-ocean.md. You can probably do fine on a $10/month plan. The trickiest part is being sure that you do every step very carefully. It’s very easy to make one typo and slow the whole process down!
  2. Embedding Discourse in Static Sites is the primary source of information on converting from Disqus to Discourse for your blog comments.
  3. Discourse plugin for static site generators like Jekyll or Octopress: Specifics for Octopress and Jekyll.

Once you configure your Discourse site to import your blog articles, you’ll have to wait a bit for the rake task to run. It’s great being able to kickstart the content of the forum with one’s blog articles!

Discourse Configuration

The configuration of Discourse for blogging is super easy.

  1. Configure the following settings, taking note that:
    1. The URLs point to your blog and include the subdomain, like www.railsonmaui.com.
    2. The embeddable host does not include http://
    3. The feed polling URL does include http://

  2. I added a category called “Blog”.
  3. I created a user called “disqus” for users not found in the Disqus comment import.

Octopress Discourse Comments Setup

  1. Remove or comment out your Disqus setup in your /_config.yml file:
    
    # Disqus Comments
    # Removed as support for Discourse comments added
    # disqus_short_name: railsonmaui
    # disqus_show_comment_count: true
    

    Note, I commented it out rather than deleting it, because I toggled this on and off while ensuring that the comment migration worked correctly and no comments were missed.

  2. Add the plugin contained in discourse_comments.rb to your /plugins directory. This plugin will append a DIV to your posts like this:
    
    <div id="discourse-comments"></div>
    <script type="text/javascript">
      var discourseUrl = "#{@site.config['discourse_url']}",
          discourseEmbedUrl = "#{@site.config['url']}#{@site.config['baseurl']}#{url}";
    
      (function() {
        var d = document.createElement('script'); d.type = 'text/javascript'; d.async = true;
        d.src = discourseUrl + 'javascripts/embed.js';
        (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(d);
        })();
    </script>
    
  3. Note that the display of comments only works on your live website, due to the fact that the Discourse server checks the source of the request for the comments (per the embeddable host setting in the configuration above).

Detailed instructions for importing your Disqus comments into Discourse

The following instructions will allow you to import the comments from Disqus, along with creating associated users for those comments. This is a GREAT way to kickstart the forum.

  1. Download an XML backup of your Disqus comments by logging into your Disqus dashboard. The URL is like https://youraccount.disqus.com/admin/discussions/.
  2. That should bring you to the Discussions tab. Then click the Export sub-tab. You’ll have to wait a few minutes for the email containing the export. I then saved the file to my ~/Downloads directory.
  3. SSH to your Droplet

    ssh root@XXX.XXX.XXX.XXX
    
  4. Get into your Docker container.
    root@forum:~# cd /var/discourse/
    root@forum:/var/discourse# ./launcher ssh app
    

    You’ll see this message:

    Welcome to Discourse Docker
    Use: rails, rake or discourse to execute commands in production
    
  5. Sudo to discourse:
    root@forum:~# sudo -iu discourse
    discourse@forum:~$ cd /var/www/discourse
    discourse@forum:/var/www/discourse$ bundle exec thor list
    
  6. Then you need to copy the XML file you downloaded from Disqus that contains an archive of your comments. The easiest way to do this is to scp the file from some place accessible on the Internet. What I did was to scp the file from my local machine to my Digital Ocean machine, and then from my Digital Ocean machine to the Docker container. Here’s an example:

    On your local machine, with the XML file (XXX.XXX.XXX.XXX is the ip of your droplet):

    scp ~/Downloads/railsonmaui-disqus.xml root@XXX.XXX.XXX.XXX:
    

    Then inside of your docker container:

    discourse@forum:/var/www/discourse$ scp root@XXX.XXX.XXX.XXX:railsonmaui-disqus.xml .
    

    That puts the file railsonmaui-disqus.xml in the current directory.

  7. Run the thor command:
    discourse@forum:/var/www/discourse$ bundle exec thor disqus:import --file=railsonmaui-disqus.xml --post-as=disqus --dry-run
    /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.6/lib/active_record/connection_adapters/postgresql_adapter.rb:898:in `rescue in connect': FATAL:  database "discourse_development" does not exist (ActiveRecord::NoDatabaseError)
    Run `$ bin/rake db:create db:migrate` to create your database
      from /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.6/lib/active_record/connection_adapters/postgresql_adapter.rb:888:in `connect'
    

    The problem is that we need to specify the environment, as is standard with Rails apps:

    RAILS_ENV=production bundle exec thor disqus:import --file=railsonmaui-disqus.xml --post-as=disqus --dry-run
    

    That command does the trick and gives you a nice message indicating what it will do once you remove the --dry-run flag.

    discourse@forum:/var/www/discourse$ RAILS_ENV=production bundle exec thor disqus:import --file=railsonmaui-disqus.xml --post-as=disqus --dry-run
    Creating Favorite RubyMine Tips - Rails on Maui... (8 posts)
    Creating Octopress Setup with Github, Org Mode, and LiveReload - Rails on Maui... (3 posts)
    

    Once you verify, run:

    RAILS_ENV=production bundle exec thor disqus:import --file=railsonmaui-disqus.xml --post-as=disqus
    

    This creates the comments and the users. Creating the users surprised me as I didn’t know that the Disqus export contained the users’ email addresses. So this script ends up triggering activation emails to all those users!

Conclusion

This is all pretty neat! Not only did I get my new forum populated with some content, but I also created users who commented on my posts in the past. With my own forum, I’m hoping to engage in more meaningful discussions regarding the technologies I blog about. Please do sign up for the forum so you can comment and receive periodic updates of what gets posted! Or just sign up when you want to post a comment. :-)

Pry, Ruby, Array#zip, CSV, and the Hash[] Constructor

A couple weeks ago, I wrote a popular article, Pry, Ruby, and Fun With the Hash Constructor demonstrating the usefulness of pry with the Hash bracket constructor. I just ran into a super fun test example of pry that I couldn’t resist sharing!

The Task: Convert CSV File without Headers to Array of Hashes

For example, you want to take a CSV file like:

|---+--------+--------|
| 1 | Justin | Gordon |
| 2 | Tender | Love   |
|---+--------+--------|

And create an array of hashes like this with column headers “id”, “first_name”, “last_name”:

[
    [0] {
               "id," => "1",
        "first_name" => "Justin",
         "last_name" => "Gordon"
    },
    [1] {
               "id," => "2",
        "first_name" => "Tender",
         "last_name" => "Love"
    }
]


Using Array#zip

I stumbled upon a note about the CSV parser that suggested using Array#zip to add keys to the parsed results when headers don’t exist in the file.

Using Array#zip? What the heck is the zip method? Compression?

[1] (pry) main: 0> ? a_array.zip

From: array.c (C Method):
Owner: Array
Visibility: public
Signature: zip(*arg1)
Number of lines: 17

Converts any arguments to arrays, then merges elements of self with
corresponding elements from each argument.

This generates a sequence of ary.size _n_-element arrays,
where _n_ is one more than the count of arguments.

If the size of any argument is less than the size of the initial array,
nil values are supplied.

If a block is given, it is invoked for each output array, otherwise an
array of arrays is returned.

   a = [ 4, 5, 6 ]
   b = [ 7, 8, 9 ]
   [1, 2, 3].zip(a, b)   #=> [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
   [1, 2].zip(a, b)      #=> [[1, 4, 7], [2, 5, 8]]
   a.zip([1, 2], [8])    #=> [[4, 1, 8], [5, 2, nil], [6, nil, nil]]

Hmmmm….Why would that be useful?

Here are some pry commands that demonstrate this. I encourage you to follow along in pry!

I first created a CSV string by hand like this:

[2] (pry) main: 0> csv_file = <<-CSV
[2] (pry) main: 0* 1, "Justin", "Gordon"
[2] (pry) main: 0* 2, "Avdi", "Grimm"
[2] (pry) main: 0* CSV
"1, \"Justin\", \"Gordon\"\n2, \"Avdi\", \"Grimm\"\n"
[3] (pry) main: 0> CSV.parse(csv_file) { |csv_row| p csv_row }
CSV::MalformedCSVError: Illegal quoting in line 1.
from /Users/justin/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/csv.rb:1855:in `block (2 levels) in shift'

Doooh!!!! That taught me that creating a legit CSV string is not as easy as it sounds.

Let’s create a legit csv string:

[4] (pry) main: 0> csv_string = CSV.generate do |csv|
[4] (pry) main: 0*   csv << [1, "Justin", "Gordon"]
[4] (pry) main: 0*   csv << [2, "Tender", "Love"]
[4] (pry) main: 0* end
"1,Justin,Gordon\n2,Tender,Love\n"

Notice, there are no quotes around the single-word names!

If I use CSV to parse this, we get the reverse result back, an array of arrays:

[16] (pry) main: 0> CSV.parse(csv_string)
[
    [0] [
        [0] "1",
        [1] "Justin",
        [2] "Gordon"
    ],
    [1] [
        [0] "2",
        [1] "Tender",
        [2] "Love"
    ]
]
[17] (pry) main: 0> CSV.parse(csv_string).class
Array < Object

Ahh… Could we use the Hash[] constructor to convert these arrays into hashes with the proper keys?

[18] (pry) main: 0> first_row = CSV.parse(csv_string).first
[
    [0] "1",
    [1] "Justin",
    [2] "Gordon"
]
[19] (pry) main: 0> col_headers = %w(id, first_name last_name)
[
    [0] "id,",
    [1] "first_name",
    [2] "last_name"
]
[20] (pry) main: 0> first_row.zip(col_headers)
[
    [0] [
        [0] "1",
        [1] "id,"
    ],
    [1] [
        [0] "Justin",
        [1] "first_name"
    ],
    [2] [
        [0] "Gordon",
        [1] "last_name"
    ]
]
[21] (pry) main: 0> Hash[ first_row.zip(col_headers) ]
{
         "1" => "id,",
    "Justin" => "first_name",
    "Gordon" => "last_name"
}

Bingo!

Now, let’s convert the full array of arrays, creating an array called rows:

[22] (pry) main: 0> rows = CSV.parse(csv_string)
[
    [0] [
        [0] "1",
        [1] "Justin",
        [2] "Gordon"
    ],
    [1] [
        [0] "2",
        [1] "Tender",
        [2] "Love"
    ]
]

Then the grand finale!

[24] (pry) main: 0> rows.map { |row| Hash[ col_headers.zip(row) ] }
[
    [0] {
               "id," => "1",
        "first_name" => "Justin",
         "last_name" => "Gordon"
    },
    [1] {
               "id," => "2",
        "first_name" => "Tender",
         "last_name" => "Love"
    }
]

And sure, you can do this all on one line by inlining the rows variable:

CSV.parse(csv_string).map { |row| Hash[ col_headers.zip(row) ] }
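Putting the whole round trip together as a plain Ruby script (note that this header array has no stray comma after "id"):

```ruby
require 'csv'

# Generate the CSV string, just as in the pry session above.
csv_string = CSV.generate do |csv|
  csv << [1, "Justin", "Gordon"]
  csv << [2, "Tender", "Love"]
end

col_headers = %w(id first_name last_name)

# Zip each parsed row with the headers, then build a Hash per row.
records = CSV.parse(csv_string).map { |row| Hash[col_headers.zip(row)] }
records
# => [{"id"=>"1", "first_name"=>"Justin", "last_name"=>"Gordon"},
#     {"id"=>"2", "first_name"=>"Tender", "last_name"=>"Love"}]
```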

Using headers option in CSV?

Well, you’d think that you could just pass the headers to CSV.parse, but that doesn’t work:

[12] (pry) main: 0> csv = CSV.parse(csv_string, headers: col_headers)
(pry) output error: #<NoMethodError: undefined method `table' for #<Object:0x007fdbfc8d5588>>

Well, what’s the doc?

[13] (pry) main: 0> ? CSV.parse

From: /Users/justin/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/csv.rb @ line 1278:
Owner: #<Class:CSV>
Visibility: public
Signature: parse(*args, &block)
Number of lines: 11

:call-seq:
  parse( str, options = Hash.new ) { |row| ... }
  parse( str, options = Hash.new )

This method can be used to easily parse CSV out of a String.  You may either
provide a block which will be called with each row of the String in turn,
or just use the returned Array of Arrays (when no block is given).

You pass your str to read from, and an optional options Hash containing
anything CSV::new() understands.

Hmmm… note that pry labels this an output error: the parse itself succeeded and returned a CSV::Table, but pry’s printer choked on displaying it. So passing the headers actually worked.

The CSV docs clearly state that the initialize method takes an option :headers

:headers If set to :first_row or true, the initial row of the CSV file will be treated as a row of headers. If set to an Array, the contents will be used as the headers. If set to a String, the String is run through a call of ::parse_line with the same :col_sep, :row_sep, and :quote_char as this instance to produce an Array of headers. This setting causes #shift to return rows as CSV::Row objects instead of Arrays and #read to return CSV::Table objects instead of an Array of Arrays.

So, what can we call on a new CSV object? Let’s list the methods.

[25] (pry) main: 0> ls CSV.new(csv_string, headers: col_headers)
Enumerable#methods:
  all?            count       each_entry        find        group_by  map      minmax     reject        sum         to_table
  any?            cycle       each_slice        find_all    include?  max      minmax_by  reverse_each  take        to_text_table
  as_json         detect      each_with_index   find_index  index_by  max_by   none?      select        take_while  zip
  chunk           drop        each_with_object  first       inject    member?  one?       slice_before  to_a
  collect         drop_while  entries           flat_map    lazy      min      partition  sort          to_h
  collect_concat  each_cons   exclude?          grep        many?     min_by   reduce     sort_by       to_set
CSV#methods:
  <<           col_sep            fcntl             header_convert     lineno      readline         skip_blanks?  to_io
  add_row      convert            field_size_limit  header_converters  path        readlines        skip_lines    truncate
  binmode      converters         fileno            header_row?        pid         reopen           stat          tty?
  binmode?     each               flock             headers            pos         return_headers?  string        unconverted_fields?
  close        encoding           flush             inspect            pos=        rewind           sync          write_headers?
  close_read   eof                force_quotes?     internal_encoding  puts        row_sep          sync=
  close_write  eof?               fsync             ioctl              quote_char  seek             tell
  closed?      external_encoding  gets              isatty             read        shift            to_i
instance variables:
  @col_sep     @field_size_limit   @headers  @parsers     @re_chars        @row_sep      @unconverted_fields
  @converters  @force_quotes       @io       @quote       @re_esc          @skip_blanks  @use_headers
  @encoding    @header_converters  @lineno   @quote_char  @return_headers  @skip_lines   @write_headers

How about this:

[14] (pry) main: 0> csv = CSV.new(csv_string, headers: col_headers).to_a
[
    [0] #<CSV::Row "id,":"1" "first_name":"Justin" "last_name":"Gordon">,
    [1] #<CSV::Row "id,":"2" "first_name":"Tender" "last_name":"Love">
]

Well, that’s getting closer.

How about if I just map those rows with a to_hash?

[16] (pry) main: 0> csv = CSV.new(csv_string, headers: col_headers).map(&:to_hash)
[
    [0] {
               "id," => "1",
        "first_name" => "Justin",
         "last_name" => "Gordon"
    },
    [1] {
               "id," => "2",
        "first_name" => "Tender",
         "last_name" => "Love"
    }
]

Bingo!
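And here is the headers route condensed into a runnable script — a sketch using a clean header array and CSV::Row#to_h (to_hash on older Rubies):

```ruby
require 'csv'

csv_string  = "1,Justin,Gordon\n2,Tender,Love\n"
col_headers = %w(id first_name last_name)

# Passing an Array as :headers means every row is data, and each row
# comes back as a CSV::Row, which converts cleanly to a Hash.
records = CSV.new(csv_string, headers: col_headers).map(&:to_h)
records
# => [{"id"=>"1", "first_name"=>"Justin", "last_name"=>"Gordon"},
#     {"id"=>"2", "first_name"=>"Tender", "last_name"=>"Love"}]
```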

I hope you enjoyed this!

Rails Gem Upgrading Tips and Strategies

What are the best-practices for upgrading gems to newer versions? What sort of tips and techniques can save time and headaches?

I built this guide based on my real-world experiences over years of gem migrations, including a recent upgrade to Rails 4.1, RSpec 3.0, and Twitter Bootstrap 3.2. There are some more specific examples of errors you might encounter at this article on the Rails on Maui blog: Specific Issues Upgrading Gems to Rails 4.1, RSpec 3, and Twitter Bootstrap 3.2.

Why Update?

Here are my favorite reasons for keeping gems relatively current:

  1. If you work on several projects, keeping the gems and Ruby version consistent makes your coding more productive, as you don’t have to keep adjusting for which version is which. Web searches tend to find relatively recent versions first, and it’s annoying to yak-shave an issue that turns out to be “oh, that doesn’t work in that older version of Rails”.
  2. Recent versions of gems will have fixes for bugs and security issues, in addition to new features. With popular open source projects, new bugs are quickly discovered and fixed.
  3. Updates are much easier if you stay relatively current. I.e., it’s much easier to update from Rails 4.0 to Rails 4.1 than to go from Rails 3.0 to Rails 4.1.

That being said, recent versions can have new bugs, so it’s best to avoid versions that are unreleased or that haven’t aged at least a few weeks.

Some Gems Will Be Way More Difficult to Update

Large libraries, like Rails, RSpec, Twitter Bootstrap, etc. are going to take more elbow grease to update. Typically if a major version number is updating, like Rails 3.x to 4.x and RSpec 2.x to 3.x, that’s going to require lots of code changes. Semantic versioning also comes into play. Going from Rails 3.x to Rails 4.x is more difficult than Rails 4.0 to Rails 4.1. There’s a similar story with RSpec 2.x to 2.99, compared to going to RSpec 3.0.

Techniques for Smoother Gem Upgrades

Locking Gem Versions

Unless you have a good reason, don’t lock a gem to a specific version as that makes updating gems more difficult. In general, consider only locking the major Rails gems, such as rails, RSpec, and bootstrap-sass, as these are the ones that will likely have more involved upgrades.

Don’t Upgrade Major Libraries Too Soon

Three reasons to wait a bit before gem updates:

  1. Dependencies among gem libraries are not yet resolved. I had tried upgrading to RSpec 3 and Rails 4.1 a couple months ago, but it was apparent that I had to fix too many other gems to get them to work with RSpec 3. Thus, I retreated back to RSpec 2.99 for a while. Now, as of August, 2014, the gem ecosystem is ripe for the move to RSpec 3.0. So unless you have a good reason, it’s best to wait maybe a couple of months after major upgrades are released before migrating.
  2. Bugs may be lurking in changed code. If you wait a bit, the early adopters will find the bugs, saving you time and frustration. The more popular a gem, the faster it will be put to rigorous use.
  3. Security problems may have been introduced. This is pretty much a special case of bugs, except that there is the possibility of a malicious security change. If you wait a bit, hopefully somebody else will discover the issue first.

Don’t Use Guard, Zeus, Spring, Spork, Etc. When Upgrading

Tools that speed up Rails like Zeus and Spring are awesome productivity enhancers, except when upgrading gems. I found that they sometimes failed to reload new versions of gems. That means massive frustration when they are not picking up the gems you actually specified. The corollary to this is to run your tests using plain rspec rather than the recommended ways of speeding up testing, such as the parallel_tests gem.

It’s not necessary to introduce the added complexity of the test accelerators when doing major library updates. Once you’ve updated your gems, then try out your favorite techniques for speeding up running tests. I’ve learned the hard way on this one. The pgr and pgk scripts below are awesome for ensuring that pre-loaders are NOT running.

pgr() {
  for x in spring rails phantomjs zeus; do 
    pgrep -fl $x;
  done
}

pgk() {
  for x in spring rails phantomjs zeus; do 
    pkill -fl $x;
  done
}

Tests: Keep Tests Passing, and Get Failing Tests Passing Immediately

There are a lot of discussions about the value, or lack thereof, of an emphasis on Test-Driven Development (TDD). However, one thing that’s indisputable is that having a large library of tests is absolutely helpful when upgrading your gems.

Naturally, it’s an iterative process to get tests passing when updating gems. First, make sure your test suite is passing.

You can try updating the gems one by one until you get a test failure. Then the issue becomes one of figuring out which related gems you might want to update to fix the test failure.

If you don’t have good test coverage, a great place to start is with integration tests that cover the basics of your app. At least you’ll be able to quickly verify that a good chunk of your app can navigate the “happy path” as you iterate on updating your gems.

Alternate Between Big Steps and Baby Steps

If you’ve updated gems recently, sometimes you can run bundle update and everything works great. Recently, that strategy failed miserably when I tried going from Rails 4.0 with RSpec 2.2 to Rails 4.1 and RSpec 3. An earlier attempt shortly after the releases of Rails 4.1 and RSpec 3 clearly showed that many dependent gems would have to be updated. A few months later, I still had many issues when trying to update too much at once.

When this happens, take small steps and keep tests passing. I.e., don’t do a bundle update without specifying which gems to update. You might update 60 gems at once! Then, when tests fail, you won’t be able to easily decipher which dependency is the problem. Specify which gems to update by running the command:

bundle update gem1 gem2 etc

Then after updating a few gems, run rspec and verify your tests pass.

Then commit your changes. Consider putting a summary of how many tests pass and how long the run takes in the commit message. The length of time is useful in case some change greatly increases test run time, or in case the run time or the number of tests dramatically decreases. Plus, this ensures you ran the tests before committing!

On a related note, you can see which gems are outdated with this command: bundle outdated.

Specific Issues Upgrading Gems to Rails 4.1, RSpec 3, and Twitter Bootstrap 3.2

This article describes some tougher issues I faced when upgrading to Rails 4.1, Twitter Bootstrap 3.2 and RSpec 3. This is a companion to my related article on Rails Gem Upgrading Tips and Strategies.

Upgrade Links

If you’re upgrading these specific gems, here’s the must-see upgrade links.

  1. Rails 4.1: A Guide for Upgrading Ruby on Rails.
  2. RSpec 2 to RSpec 3.
  3. Twitter Bootstrap: Migrating to v3.x is essential if you’re going from 2.x to 3.x.

Troubleshooting with RubyMine “Find In Path” and the Debugger

After making the required code changes to address the deprecation errors when moving to RSpec 3, I ran into the obscure error below. This one really stumped me: the stack trace did not give a specific line causing the error, and when I ran the tests individually, I didn’t see any errors.

Failure/Error: Unable to find matching line from backtrace
PG::ConnectionBad: connection is closed

Here’s the stack trace:

Failure/Error: Unable to find matching line from backtrace
PG::ConnectionBad:
  connection is closed
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/postgresql_adapter.rb:589:in `reset'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/postgresql_adapter.rb:589:in `reconnect!'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract_adapter.rb:377:in `verify!'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:458:in `block in checkout_and_verify'
# .rvm/gems/ruby-2.1.2@bpos/gems/activesupport-4.0.8/lib/active_support/callbacks.rb:373:in `_run__2436983933572130156__checkout__callbacks'
# .rvm/gems/ruby-2.1.2@bpos/gems/activesupport-4.0.8/lib/active_support/callbacks.rb:80:in `run_callbacks'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:457:in `checkout_and_verify'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:358:in `block in checkout'
# .rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/monitor.rb:211:in `mon_synchronize'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:355:in `checkout'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:265:in `block in connection'
# .rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/monitor.rb:211:in `mon_synchronize'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:264:in `connection'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:546:in `retrieve_connection'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_handling.rb:79:in `retrieve_connection'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_handling.rb:53:in `connection'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/fixtures.rb:450:in `create_fixtures'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/fixtures.rb:899:in `load_fixtures'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/fixtures.rb:870:in `setup_fixtures'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/fixtures.rb:712:in `before_setup'
# .rvm/gems/ruby-2.1.2@bpos/gems/rspec-rails-3.0.2/lib/rspec/rails/adapters.rb:71:in `block (2 levels) in <module:MinitestLifecycleAdapter>'
...

The error was happening in a test that used resque_spec. After much searching, I began to suspect that some customization or optimization caused the issue.

RubyMine Find in Path

RubyMine’s Find in Path, searching Project and Libraries, is extremely useful for getting more context around an error message. In this case, RubyMine found the error message in a C file.

Here’s the C code containing the error message. The Ruby stack trace did not go this far:

/*
 * Fetch the data pointer and check it for sanity.
 */
PGconn *
pg_get_pgconn( VALUE self )
{
  PGconn *conn = pgconn_check( self );

  if ( !conn )
    rb_raise( rb_eConnectionBad, "connection is closed" );

  return conn;
}

And this is the Ruby code from the stack trace that calls into it:

# Disconnects from the database if already connected, and establishes a
# new connection with the database. Implementors should call super if they
# override the default implementation.
def reconnect!
  clear_cache!
  reset_transaction
end

RubyMine: Sometimes the Debugger Helps!

For this really troubling issue, I put breakpoints in the connection adapter gem. I correctly guessed that the cause of the error was disconnect! rather than reconnect!.

Here are a few images that show how the debugger really helped me figure out the obscure “connection is closed” error:

That is what led me to try out removing the heroku-resque gem, as I noticed that was what was closing the connections in my test runs. Removing that gem fixed my rspec errors with the upgrades.

Note, an alternative to using breakpoints in RubyMine would have been to put in a puts caller in the suspect methods of the libraries. However, one would have to remember to remove that later! I think the debugger was a good pick for this issue. If you don’t use RubyMine, you might try the ruby debugger or the pry gem.

Rails 4.1 Errors

shuffle! removed from ActiveRecord::Relation

NoMethodError:
  undefined method `shuffle!' for #<ActiveRecord::Relation []>

The fix for that is to convert the relation to an array before calling shuffle. Naturally, you only want to do this with a limited set of data.
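A sketch of the fix, with a hypothetical Widget model in the comments and a plain-Array stand-in that actually runs:

```ruby
# Rails 4.0 allowed this, Rails 4.1 does not:
#   random_widgets = Widget.limit(20).shuffle!
# The fix: materialize the relation as an Array first:
#   random_widgets = Widget.limit(20).to_a.shuffle

# Plain-Ruby illustration of the same shape:
records  = %w(a b c d e)    # stand-in for relation.to_a
shuffled = records.shuffle  # Array#shuffle returns a new, shuffled Array
```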

Flash changes

This one bit me: http://guides.rubyonrails.org/upgrading_ruby_on_rails.html#flash-structure-changes

I was comparing symbols when converting from the flash type to the bootstrap class. Since the keys are always normalized to strings, I changed the code to compare to strings.

It’s a good idea to review all the changes in the Rails Upgrade Guide.

Here’s the method where I was previously comparing the flash type to symbols rather than strings:

def twitterized_type(type)
  # http://ruby.zigzo.com/2011/10/02/flash-messages-twitters-bootstrap-css-framework/
  case type
    when "alert"
      "warning"
    when "error"
      "danger"
    when "notice"
      "info"
    when "success"
      "success"
    else
      type.to_s
  end
end
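The case/when above works; an equivalent shape I sometimes prefer is a string-keyed lookup table, where calling to_s on the incoming type makes the method safe whether it receives a string or a symbol. A sketch, not the original app’s code:

```ruby
# Flash type (string keys, post-Rails-4.1 normalization) to Bootstrap class.
FLASH_TO_BOOTSTRAP = {
  "alert"   => "warning",
  "error"   => "danger",
  "notice"  => "info",
  "success" => "success"
}.freeze

def twitterized_type(type)
  # to_s guards against symbols; unknown types pass through unchanged.
  FLASH_TO_BOOTSTRAP.fetch(type.to_s, type.to_s)
end

twitterized_type("alert")  # => "warning"
twitterized_type(:alert)   # => "warning" (to_s makes symbols safe too)
twitterized_type("custom") # => "custom"
```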

Upgrading Twitter Bootstrap to 3.2 from 3.0

I had this bit of code in my scss files from the old Twitter Bootstrap.

// Sprite icons path
// -------------------------
$iconSpritePath: asset-url("glyphicons-halflings.png");
$iconWhiteSpritePath: asset-url("glyphicons-halflings-white.png");

Since I’m using the new 3.2 version of bootstrap-sass, I needed to do the following, per the details here:

  1. Delete the glyphicons-halflings.png and glyphicons-halflings-white.png files.
  2. Remove the references shown above to $iconSpritePath and $iconWhiteSpritePath.
  3. Add this line to my application.css.scss:

@import "bootstrap-sprockets";

  4. Add this line to the Gemfile:

gem 'autoprefixer-rails'

Please let me know if this article helped you or if I missed anything!

Aloha,

Justin

Fast Tests: Comparing Zeus With Spring on Rails 4.1 and RSpec 3

What’s faster? Zeus with Parallel Tests or Spring, in the context of Rails 4.1, RSpec 3, Capybara 2.4, and PhantomJs?

The bottom line is that both work almost equivalently as fast, and the biggest difference for me concerned compatibility with the parallel_tests gem. Zeus works fine with Parallel Tests, although it makes little difference overall with or without Zeus. Spring doesn’t work with Parallel Tests, but you can work around this issue. So stick with Zeus if it works for you.

And regardless of using Spring or Zeus, the shell scripts provided below called pgr and pgk are essential for quickly listing or killing Zeus, Spring, Rails, or Phantomjs processes!

It’s also worth noting that the biggest advantage of using the Zeus or Spring pre-loaders is saving the Rails startup time. On my machine, this is about 3 to 5 seconds. That matters a lot if the test I’m focusing on only takes a second or two, such as when doing TDD. However, when running a whole test suite taking minutes, 3-5 seconds can get swallowed up by other things, such as rspec-retry, which retries failing Capybara tests.

Overview

I’ve written about my integration testing setup: Capybara, PhantomJs, Poltergeist, and Rspec Tips. For a while, I’ve been eager to upgrade to Rails 4.1 and RSpec 3. Finally, in August, 2014, the gem ecosystem allowed this to happen! I’ve got a related article on my tips for upgrading to Rails 4.1 and RSpec 3.

Once I had upgraded nearly every gem in my client’s large Rails project to the latest gem versions, I was pleasantly surprised that I could once again get Zeus, Guard, RSpec, Capybara, Poltergeist, Parallel Tests, etc. to all play nicely together.

Always curious as to the value of the latest defaults in Rails, I decided to try out Spring. Both Spring and Zeus preload Rails so that you don’t have to pay the same startup cost for every test run. Here’s a RailsCast on the topic: #412 Fast Rails Commands.

The end result is that both Zeus and Spring give great results and are very similar in many ways. The biggest difference for me is that only Zeus (and not Spring) works with Parallel Tests. Interestingly, I got very similar results when using Parallel Tests with or without Zeus. It turns out that it is possible to run Parallel Tests with Spring installed so long as you disable it by setting the environment variable like this: DISABLE_SPRING=TRUE parallel_rspec -n 6 spec.

The bottom line for me is that I don’t have any good reason to move away from Zeus to Spring, and the fact that Spring is part of stock Rails is not a sufficient reason for me. That being said, on another project which is smaller, I’m not motivated to switch from Spring to Zeus.

Performance

Note that in the commands below, you must prefix a command with zeus to be using Zeus. If using Spring, be sure that you’re using the Spring-modified binstub scripts in your bin directory, by having your path appropriately set or by using bin/rake and bin/rspec (install spring-commands-rspec).

The times shown below are from both sample runs of a single directory of non-integration specs and from the full test suite of 914 tests, many of which are Capybara tests, on a 2012 Retina MacBook Pro (SSD, 16 GB) while running Emacs, RubyMine, Chrome, etc. Times were gathered by prefixing commands with the time command. Running zeus rspec seems a bit slower than using Spring. However, when running the integration tests, my test execution time was always variable, depending on the number of Capybara timeouts and retries.

command                     zeus loader   spring loader   no loader
rspec spec/utils            0:19.1        0:17.7          0:22.8
rake spec:utils             0:15.6        0:17.9          0:18.1
rake spec                   6:11.9        6:15.0          8:02.5
rspec spec                  5:51.7        5:28.0          5:37.2
parallel_rspec -n 6 spec    2:28.7        n/a             2:28.0

Zeus and Spring vs. plain RSpec

Here are some advantages and disadvantages of using either Zeus or Spring compared to plain RSpec.

Advantages

  1. Both save time for running basic commands like rspec, rake, rails, etc. The performance of both is very similar.

Disadvantages

  1. Both can be extremely confusing when they fail to update automatically. This tends to happen after updating gems or running database migrations. You end up yak shaving when you don’t see your changes taking effect! I.e., put in some print statements, and then you don’t see them shown when they should. Arghhhh!
  2. rspec-retry seems essential for dealing with random Capybara failures with either Zeus or Spring. I often see fewer of these errors when I use neither Zeus/Spring nor parallel_tests.
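For reference, here’s roughly what an rspec-retry setup looks like — a minimal sketch assuming the gem’s documented configuration options; tune the retry count to your suite:

```ruby
# spec/spec_helper.rb — minimal rspec-retry configuration (sketch)
require 'rspec/retry'

RSpec.configure do |config|
  config.verbose_retry = true     # log each retry attempt
  config.default_retry_count = 3  # retry flaky examples up to 3 times
end
```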

Zeus vs. Spring

Advantages

  1. Zeus works with the parallel_tests gem. This more than halves my time for running my entire test suite. However, when writing this article, I found that it made little difference, at least when slowed down by sporadically failing Capybara tests that are retried. That being said, I’m certain that Parallel Tests with Zeus is faster, or at worst the same, as without Zeus.

Disadvantages

  1. You need to start up a separate shell process, running zeus start. An advantage of this is that if there’s a problem starting up, the output in the Zeus console window is fairly clear.
  2. You run the command “zeus rake” rather than just “rake”. Consequently, I made some shell aliases (see below).
  3. Zeus only uses the environment from when Zeus was started and ignores any environment variables when commands are run.

Spring vs. Zeus

Advantages

  1. Spring is a default part of Rails, so you know it’s well supported, and bugs will be fixed fast.
  2. Slightly simpler to install and use than Zeus.

Disadvantages

  1. Spring lacks support for parallel_tests. See this Github issue: incompatible with spring #309. You can, however, run parallel_tests so long as you run the command like this: time DISABLE_SPRING=TRUE parallel_rspec -n 6 spec. I.e., you need to set DISABLE_SPRING so that parallel_rspec does not use Spring.
  2. Spring is a bit opaque in terms of errors, given that there’s no console window. See the README for how to view the Spring log.

Miscellaneous Tips

Be sure to disable either Zeus or Spring when updating gems. Consider restarting Zeus or Spring after a database migration. See the below scripts called pgr and pgk for seeing and killing Zeus/Spring related processes.

Relevant Gems Working For Me

The right combination of gems seems pretty critical for getting all the parts to play nicely together. As of August 15, 2014, the most recent compatible versions of the following gems worked well together. This means running “bundle update” without locking the gem versions.

capybara-screenshot (0.3.21)
capybara (2.4.1)
guard (2.6.1)
guard-bundler (2.0.0)
guard-livereload (2.3.0)
guard-rails (0.5.3)
guard-resque (0.0.5)
guard-rspec (4.3.1)
guard-unicorn (0.1.1)
parallel_tests (1.0.0)
poltergeist (1.5.1)
rails (4.1.4)
resque_spec (0.16.0)
rspec (3.0.0)
rspec-instafail (0.2.5)
rspec-its (1.0.1)
rspec-mocks (3.0.3)
rspec-rails (3.0.2)
rspec-retry (0.3.0)
vcr (2.9.2)
webmock (1.18.0)
zeus (0.13.3)
zeus-parallel_tests (0.2.4)

Zeus Shell Configuration (ZSH)

echoRun() {
  START=$(date +%s)
  echo "> $1"
  eval time $1
  END=$(date +%s)
  DIFF=$(( $END - $START ))
  echo "It took $DIFF seconds"
}

alias zr='zeus rake'

alias parallel_prepare='rake parallel:prepare ; rake parallel:rake\[db:globals\]'

zps() {
  # Run parallel_rspec, using zeus, passing in number of threads, default is 6

  p=${1:-6}
  # Skipping zeus b/c env vars don't work with zeus

  # start zeus log level fata 
  # echoRun "SKIP_RSPEC_FOCUS=YES RSPEC_RETRY_COUNT=7 RAILS_LOGGER_LEVEL=4 zeus parallel_rspec -n $p spec"
  echoRun "zeus parallel_rspec -n $p spec"
}

# List processes related to rails
pgr() {
  for x in spring rails phantomjs zeus; do 
    pgrep -fl $x;
  done
}

# Kill processes related to rails
pgk() {
  for x in spring rails phantomjs zeus; do 
    pkill -fl $x;
  done
}

Please let me know if this article helped you or if I missed anything!

Aloha,

Justin

Pry, Ruby, and Fun With the Hash Constructor

I recently had a chance to pair with Justin Searls of TestDouble, and we got to chatting about pry and the odd Hash[] constructor. Here’s a few tips that you might find useful.

The main reasons I use pry are:

  1. Testing Ruby syntax.
  2. Documentation and source code browsing.
  3. History support.
  4. cd into an object to change the context, and ls to list methods of that object.

Pry Configuration

To install pry with Rails, place this in your Gemfile:

gem 'pry-rails', :group => :development

Then run bundle install, followed by rails console. That gets you the default pry configuration. At the bottom of this article is my ~/.pryrc (gist). Create that file and then run rails c (short for rails console).

You’ll see this useful reminder of the customizations:

Helpful shortcuts:
h  : hist -T 20       Last 20 commands
hg : hist -T 20 -G    Up to 20 commands matching expression
hG : hist -G          Commands matching expression ever used
hr : hist -r          hist -r <command number> to run a command
Samples variables
a_array: [1, 2, 3, 4, 5, 6]
a_hash: { hello: "world", free: "of charge" }

Testing syntax: Hash[]

The Hash[] method is one of the odder methods in Ruby, and oh-so-useful if you’re doing map/reduce types of operations.

For example, how do you transform all the keys in a hash to be uppercase?

Let’s try this in pry (note: a_hash is defined in my .pryrc).

[1] (pry) main: 0> a_hash
{
    :hello => "world",
     :free => "of charge"
}
[2] (pry) main> a_hash.map { |k,v| [k.to_s.upcase, v] }
[
    [0] [
        [0] "HELLO",
        [1] "world"
    ],
    [1] [
        [0] "FREE",
        [1] "of charge"
    ]
]

OK, that gives us an Array of tuples.

Then run these two commands. _ is the value of the last expression.

> tmp = _
> Hash[tmp]
{
    "HELLO" => "world",
     "FREE" => "of charge"
}

Bingo! Now let’s dig into this a bit more.
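The two steps collapse into a one-liner; as a plain script:

```ruby
a_hash  = { hello: "world", free: "of charge" }

# map produces [key, value] tuples; Hash[] assembles them back into a Hash.
upcased = Hash[a_hash.map { |k, v| [k.to_s.upcase, v] }]
upcased
# => {"HELLO"=>"world", "FREE"=>"of charge"}
```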

Memoization with Hash

Hash has another unusual constructor useful for memoizing a method’s return value when parameters are involved. Justin Weiss wrote a good article explaining it: 4 Simple Memoization Patterns in Ruby (and One Gem).

Here’s a quick sample in Pry:

[5] (pry) main: 0> hh = Hash.new { |h, k| h[k] = k * 2 }
{}
[6] (pry) main: 0> hh[2]
4
[7] (pry) main: 0> hh[4]
8

You can even use an array for the key values:

[8] (pry) main: 0> hh = Hash.new { |h, k| h[k] = k[0] * k[1] }
{}
[9] (pry) main: 0> hh[[2,3]]
6
[10] (pry) main: 0> hh[[4,5]]
20

Browsing Documentation and Source

It’s super useful to be able to easily see the documentation for any method, which you can do with the ? command. Similarly, you can see the source by using $.

[3] (pry) main> ? Hash[]

From: hash.c (C Method):
Owner: #<Class:Hash>
Visibility: public
Signature: [](*arg1)
Number of lines: 12

Creates a new hash populated with the given objects.

Similar to the literal { _key_ => _value_, ... }. In the first
form, keys and values occur in pairs, so there must be an even number of
arguments.

The second and third form take a single argument which is either an array
of key-value pairs or an object convertible to a hash.

   Hash["a", 100, "b", 200]             #=> {"a"=>100, "b"=>200}
   Hash[ [ ["a", 100], ["b", 200] ] ]   #=> {"a"=>100, "b"=>200}
   Hash["a" => 100, "b" => 200]         #=> {"a"=>100, "b"=>200}

Hmmmm… Hash[] also takes a flat list of keys and values, so we can splat an array into it. Let’s try that:

[16] (pry) main: 0> a_array
[
    [0] 1,
    [1] 2,
    [2] 3,
    [3] 4,
    [4] 5,
    [5] 6
]
[17] (pry) main: 0> Hash[*a_array]
{
    1 => 2,
    3 => 4,
    5 => 6
}

Neat!
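As an aside, since Ruby 2.1 there’s also Enumerable#to_h, so the same pairing-up can be written without Hash[] by slicing the array into pairs:

```ruby
a_array = [1, 2, 3, 4, 5, 6]

# each_slice(2) yields [1, 2], [3, 4], [5, 6]; to_h turns pairs into a Hash.
a_array.each_slice(2).to_h
# => {1=>2, 3=>4, 5=>6}
```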

Also note that you can see instance methods by prefixing the method name with # or using an actual instance, like this:

[19] (pry) main: 0> ? Hash#keys

From: hash.c (C Method):
Owner: Hash
Visibility: public
Signature: keys()
Number of lines: 5

Returns a new array populated with the keys from this hash. See also
Hash#values.

   h = { "a" => 100, "b" => 200, "c" => 300, "d" => 400 }
   h.keys   #=> ["a", "b", "c", "d"]
[20] (pry) main: 0> ? a_hash.keys

Browsing History

History expansion in pry is also nice. As mentioned above, my .pryrc has 4 history aliases.

h  : hist -T 20       Last 20 commands
hg : hist -T 20 -G    Up to 20 commands matching expression
hG : hist -G          Commands matching expression ever used
hr : hist -r          hist -r <command number> to run a command

Let’s try those out. It’s important to note that the -T tails results after grepping the whole history. I.e., the -T 20 strips the results down to the last 20 that matched.

Show last 20 commands.

[10] (pry) main: 0> h
1: a_hash
2: a_hash.map { |k,v| [key.upcase, v] }
3: a_hash.map { |k,v| [key.to_s.upcase, v] }
4: a_hash.map { |k,v| [k.upcase, v] }
5: a_hash.map { |k,v| [k.to_s.upcase, v] }
6: tmp = _
7: Hash[tmp]
8: ? Hash[]
9: $ Hash[]

Grep all commands for upcase and show last 20 matches.

[11] (pry) main: 0> hg upcase
2: a_hash.map { |k,v| [key.upcase, v] }
3: a_hash.map { |k,v| [key.to_s.upcase, v] }
4: a_hash.map { |k,v| [k.upcase, v] }
5: a_hash.map { |k,v| [k.to_s.upcase, v] }

Grep all commands for upcase and show all. The history of my example is short so below is the same as above. If the history were longer, as it typically will be, then you might get pages of results!

[12] (pry) main: 0> hG upcase
 2: a_hash.map { |k,v| [key.upcase, v] }
 3: a_hash.map { |k,v| [key.to_s.upcase, v] }
 4: a_hash.map { |k,v| [k.upcase, v] }
 5: a_hash.map { |k,v| [k.to_s.upcase, v] }
11: hg upcase

cd and ls within Pry

I love to use cd and ls in pry.

  1. cd changes the context of pry, a bit like the current directory in the shell, except for Ruby objects. And classes are objects too!
  2. ls lists methods available on an object, a bit like listing files in the shell.
[22] (pry) main: 0> cd a_hash.keys
[26] (pry) main / #<Array>: 1> length
2
[27] (pry) main / #<Array>: 1> first
:hello
[28] (pry) main / #<Array>: 1> last
:free
[29] (pry) main / #<Array>: 1> ls
Enumerable#methods:
  all?  chunk           detect     each_entry  each_with_index   entries   find      flat_map  index_by  lazy   max     member?  min_by  minmax_by  one?           partition  slice_before  sum     to_table
  any?  collect_concat  each_cons  each_slice  each_with_object  exclude?  find_all  group_by  inject    many?  max_by  min      minmax  none?      original_grep  reduce     sort_by       to_set  to_text_table
JSON::Ext::Generator::GeneratorMethods::Array#methods: to_json_without_active_support_encoder
Statsample::VectorShorthands#methods: to_scale  to_vector
SimpleCov::ArrayMergeHelper#methods: merge_resultset
Array#methods:
  &    []=      clear        cycle       drop_while        fill        frozen?       inspect  permutation         push                  reverse       select     slice!      third                          to_gsl_integration_qaws_table        to_qaws_table  unshift
  *    abbrev   collect      dclone      each              find_index  grep          join     place               rassoc                reverse!      select!    sort        to                             to_gsl_vector                        to_query       values_at
  +    append   collect!     deep_dup    each_index        first       hash          keep_if  pop                 recode_repeated       reverse_each  shelljoin  sort!       to_a                           to_gslv                              to_s           zip
  -    as_json  combination  delete      empty?            flatten     in_groups     last     prefix              reject                rindex        shift      sort_by!    to_ary                         to_gv                                to_sentence    |
  <<   assoc    compact      delete_at   eql?              flatten!    in_groups_of  length   prepend             reject!               rotate        shuffle    split       to_csv                         to_h                                 to_xml
  <=>  at       compact!     delete_eql  extract_options!  forty_two   include?      map      pretty_print        repeated_combination  rotate!       shuffle!   suffix      to_default_s                   to_json                              transpose
  ==   blank?   concat       delete_if   fetch             fourth      index         map!     pretty_print_cycle  repeated_permutation  sample        size       take        to_formatted_s                 to_json_with_active_support_encoder  uniq
  []   bsearch  count        drop        fifth             from        insert        pack     product             replace               second        slice      take_while  to_gsl_integration_qawo_table  to_param                             uniq!
self.methods: __pry__
locals: _  __  _dir_  _ex_  _file_  _in_  _out_  _pry_

It’s worth noting that the output shows which modules declare each of the object’s methods.

To see more of what pry can do for you, simply type help at the command line.

My ~/.pryrc file

Create a file in your home directory called ~/.pryrc.

2014 Golden Gate Ruby Conference: Top 10 Reasons to Attend

Woo hoo! I’m going to the 2014 Golden Gate Ruby Conference. It’s at UCSF Mission Bay, San Francisco, September 19-20, 2014. I wrote an article about my experience last year, GoGaRuCo 2013: Community > Code. If you’re on the fence about attending, here are my top reasons why you should consider attending. I recommend not delaying signing up, as last year I saw folks begging for tickets once the conference sold out. According to Leah Silber, one of the conference organizers, GoGaRuCo has sold out every year, except for maybe year one.

Top 10 Reasons To Attend GoGaRuCo

  1. San Francisco is a great town to visit, and there’s no better month to visit than September as dense fog is least likely!
  2. It’s a relatively small conference compared to RailsConf, which I find much more engaging and relaxing. The attendees seem to be mostly highly passionate local Rubyists, with a sprinkling from around the world.
  3. A one track conference is nice in that you don’t have to worry about picking which talks to attend.
  4. There’s a 15 minute break between each talk to socialize with fellow attendees or speakers. Socializing is why you come to these talks!
  5. Yehuda will likely come up with some interesting talk!
  6. Ruby programming is really more of an art and passion than work, and the people that attend GoGaRuCo reflect this!
  7. You’ll probably make a few new friends and leave inspired.
  8. The food is super, both at the conference and throughout the city. And the evening events last year were great as well.
  9. There’s probably going to be a job board, just in case that interests you.
  10. You won’t need any more T-shirts for another year!

Photography

I’m volunteering as the official photographer of GoGaRuCo. My mission is to “get 2-3 good shots of each speaker, a couple of audience shots during each day’s lunch and breaks, a shot or two of each exhibitor table, 2-3 team photos, and a smattering of everything else.” So please don’t be shy about asking to have your photograph taken.

Here’s a sample of shots I took at GoGaRuCo 2013. Tons more photos are linked here: GoGaRuCo 2013: Community > Code.

Available for Consulting

If you’d like to meet me around the time of GoGaRuCo, don’t hesitate to email me to try to meet up in person. Possibly you might have a project that could use my help?

On a personal note, I spent the better part of my adulthood in San Francisco, so I’ve got tons of friends there. All my consulting clients tend to be from the Bay Area as well.

Remote Pair Programming Tips Using RubyMine and Screenhero

I had the opportunity to spend the entire workday remote pair programming from my office in Maui with a San Francisco client from Cloud City Development. We used our normal tools of RubyMine, Chrome, and iTerm2 on a 27” Cinema Display shared via Screenhero. While remote will probably never be 100% as good as true in-person pairing, it’s getting very close! Here are some tips for effective remote pair programming; scroll down to the bottom for the TLDR if you’re short on time. Overall, I would highly recommend remote pairing with RubyMine on a full 27” Cinema Display, using an iPad with a Google Hangout for eye contact!

Here’s a very detailed video of how to do remote collaboration:

Telepresence Using Video Chat on iPad

Per the recommendation of Tim Connor of Cloud City Development, I started using an iPad for the telepresence video only, running Google Hangouts with the Hangout’s microphone muted and using the audio on Screenhero. While one can run Google Hangouts on the laptop, it can really suck up the CPU. Note that an iPhone, or probably an Android phone or tablet, would work equally well. In terms of audio, the microphone and speakers are better on the computer. If one is using the laptop for the telepresence video with multiple screens, it’s key to use the camera on the screen where one will be looking at the Hangout, not on the Screenhero screen. As shown in the pictures below, it’s key that it’s obvious when the pairing partners are looking at each other versus at Screenhero. Incidentally, Screenhero did not suffer any degradation when combined with the Google Hangout, regardless of whether the Hangout ran on the laptop or a mobile device.

In the below images, note where our eyes are focused.

Talking to each other, making eye contact

Both looking at screen

Talking to each other, making eye contact

Shaka from Steve and Justin

Screenhero

We both used Screenhero on Macs. I’ve done plenty of remote pair programming using Google Hangouts, but typically only the person sharing the screen drives the code. Screenhero allows true screen sharing such that both programmers can do the typing and mousing. With the shared screen being a 27” Cinema Display, I set my Screenhero window to full screen and the resolution was nearly perfect. Yes, when scrolling and switching apps, there is a slight delay, but it was extremely manageable, to the point that I would almost forget that I’m working on a computer 3000 miles away. Although there’s a slight lag in seeing the keys you type, it’s minor enough that it’s not a nuisance. The dual cursor support works great. Here’s a video demo of the dual cursor support.

RubyMine IDE

My pairing partners and I were already using RubyMine, so it was a natural choice over the conventional remote pairing setup of tmux and Vim. RubyMine combined with Screenhero, same-size big screens, fast computers, and very good broadband resulted in a productive pairing setup. One complaint I hear about Vim setups is that pair programmers tend not to customize their Vim keymaps. With RubyMine, that’s not an issue thanks to a feature called “Quick Switch Scheme,” which allows very fast switching of keyboard bindings. I’m a Vim user (IdeaVim), and I would have been crippled without my favorite RubyMine Vim bindings. I like the “Quick Switch” feature so much that I made a short screencast on it, displayed below.

RailsConf 2014

My Talk: Concerns, Decorators, Presenters, Service Objects, Helpers, Help me Decide

(Lack of) Live Coding in my Talk

Due to time constraints, I chose to skip the live coding I had prepared to do in my talk. Please let me know if you’d be interested in a screencast walking through the sample code. I will create one if there is sufficient demand.

Rocking With Tmux, Tmuxinator, Guard, Zeus, and iTerm2 for Rails Development

What’s the most effective way to:

  1. Start several different processes for Rails, such as Zeus, Rails server, rspec, resque, and the scheduler.
  2. Have the output for each process in a separate tab.
  3. Not have the process pause when you scroll the output, as happens in tmux.

Here’s a short demo of using tmuxinator to get a project running in several iterm2 tabs:

Why Guard?

I use Guard for:

  1. Automatically running rspec tests based on changes in either tests or source files. Together with Zeus, I haven’t found a faster way to get immediate feedback from tests. Pro tip: Learn how to use :focus with your specs to configure exactly what tests to have guard run.
  2. Automatically restarting the server when needed. For example, if you change gems or routes, you need to restart the server.
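
As a sketch, a Guardfile wiring guard-rspec through Zeus might look like the following. The watch patterns and the guard-rails plugin are illustrative; adjust them for your project:

```ruby
# Guardfile (sketch) -- run specs through Zeus for fast feedback
guard :rspec, cmd: "zeus rspec" do
  watch(%r{^spec/.+_spec\.rb$})
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
  watch("spec/spec_helper.rb") { "spec" }
end

# Restart the Rails server when gems or routes change
guard :rails do
  watch("Gemfile.lock")
  watch("config/routes.rb")
end
```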

While I love running Guard with Zeus, Spring is the default in Rails 4.1, so I’ll probably give that a try in the near future.

Why Tmuxinator and Tmux?

Tmuxinator is awesome for configuring the layout of several processes.

Here’s what a sample tmuxinator file looks like.
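
A typical tmuxinator project file lives at something like ~/.tmuxinator/my_project.yml. The window names and commands below are illustrative, not my actual project:

```yaml
# ~/.tmuxinator/my_project.yml (sketch)
name: my_project
root: ~/workspace/my_project
windows:
  - server: zeus start
  - guard: bundle exec guard
  - resque: bundle exec rake resque:work
  - shell: # plain shell for git, etc.
```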

When I run the command

mux my_project

I then see the following. This is way easier than opening up tabs in iTerm2 and running commands every time.

The main problem with this setup is that if you scroll a window backwards (using the tmux keyboard bindings) and don’t un-scroll, the process in that window, such as the Rails server, pauses. That’s super annoying. Often I’m running specs and want to scroll back to see a stack trace, but that prevents the test run from continuing! Here’s a short discussion of the issue.

Capybara PhantomJs Poltergeist Rspec Tips

I’ve added a page of tips on integration (aka feature spec) testing using Capybara, PhantomJs, Poltergeist, and Rspec.

Some of the tips include:

  1. Favorite test configuration (gems, spec_helper, etc.) for feature specs.
  2. How to troubleshoot and debug feature specs
  3. My setup for using Zeus with parallel-tests, including a rake task for setting up the databases.
  4. Tricky testing:
    1. Auto-complete dropdowns (some handy utility methods).
    2. Hover effects (easy now!)
    3. AJAX
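
For example, hover effects that only fire under a JavaScript driver can be exercised with Capybara’s hover. This is a rough sketch with hypothetical selectors and path helper, not code from the tips page:

```ruby
# Feature spec fragment (sketch) -- selectors are hypothetical
it "shows the overlay on hover", js: true do
  visit photos_path
  find(".photo-thumbnail").hover
  # Capybara's matchers wait, which also covers AJAX-driven updates
  expect(page).to have_css(".photo-overlay")
end
```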

I’ll try to keep this page of tips updated as my test configuration evolves.

Org-Mode Octopress Setup V2

Note: This is a refresh of my original post from April, 2013 to adjust for Emacs 24.3 and org-mode 8.2.x

WordPress seemed like a good blogging platform, but it just didn’t feel right. I spend all day editing text files using vim key-bindings, and I love Org Mode for all non-coding writing. If you don’t know Org Mode, it’s like Markdown mode on steroids. You can have a numbered list in Markdown, but org-mode lets you re-order the list, and that’s just the beginning. Editing blog documents in the WordPress editor felt almost as bad as being told to use MS Word. I found that the ergonomics of Org Mode, along with all the goodness of recent versions of Emacs, such as Evil (Vim emulation), made organizing creative thoughts so much more enjoyable.

So I bit the bullet one weekend and dove into Octopress, publishing my first article, Octopress Setup with Github, Org Mode, and LiveReload. The solution presented in that article, Introducing Octopress Blogging for Org-Mode, stopped working when I upgraded Emacs to 24.3 and org-mode to 8.2.x. Here’s a rehash of my original article, updated to the latest software versions as of March, 2014.

If you’re used to writing real web applications, rather than knowing the intricacies of a giant monolithic blogging platform, then customizing Octopress seems much more straightforward. It’s much more in keeping with the Unix philosophy so many of us love: small and modular rather than monolithic.