Rails on Maui

Programming in Paradise

Enums and Queries in Rails 4.1, and Understanding Ruby

Sometimes when you get puzzled by what Rails is doing, you really just need to understand what Ruby is doing.

For example, given this simple code to get an attribute value:

# return value of some_attribute and foobar
def some_attribute_foobar
  "#{some_attribute} and foobar"
end

Beginners are often stumped by why this code does not set an attribute value:

# change the value of some_attribute to foobar
def change_some_attribute
  # why doesn't the next line set the some_attribute value to "foobar"?
  some_attribute = "foobar"
  save!
end

What’s going on?

In the first method, some_attribute is actually a method call that gets the attribute value of the record. This works in Rails ActiveRecord thanks to the Ruby feature method_missing, which allows code to run when a method is called that does not exist.

In the second method, a local variable called some_attribute is getting assigned. There is no call to method_missing, as this is a variable assignment!
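This behavior is plain Ruby, not Rails magic. Here's a minimal sketch using an ordinary Ruby class (the `Record` class and method names are hypothetical) showing the difference:

```ruby
# Plain-Ruby illustration (no Rails required) of the difference between
# a local variable assignment and a call to an attribute writer.
class Record
  attr_accessor :some_attribute

  def broken_change
    some_attribute = "foobar" # creates a new local variable; the writer is never called
    some_attribute            # reference it so Ruby doesn't warn about an unused variable
  end

  def working_change
    self.some_attribute = "foobar" # explicit receiver, so Ruby calls some_attribute=
  end
end

record = Record.new
record.broken_change
puts record.some_attribute.inspect # nil -- the attribute was never set
record.working_change
puts record.some_attribute.inspect # "foobar"
```

Because `broken_change` only creates a local variable, the reader still returns nil afterwards; the explicit `self.` receiver is what makes Ruby call the writer method.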

The correct code should have been:

# change the value of some_attribute to foobar
def change_some_attribute
  self.some_attribute = "foobar"
  save!
end

In this case, we’re calling the method some_attribute= on the model instance, and we get the expected result of assigning an attribute value.

Enums

For those not familiar with enums:

An enum type is a special data type that enables for a variable to be a set of predefined constants. The variable must be equal to one of the values that have been predefined for it.

Enums, introduced in Rails 4.1, are a place a lot of Ruby magic happens! It’s critical to understand Ruby well in order to understand how to use enums effectively. Let’s suppose we have this simple example, copied over from the Rails docs:

class Conversation < ActiveRecord::Base
  enum status: [ :active, :archived ]
end

# conversation.update! status: 0
conversation.active!
conversation.active? # => true
conversation.status  # => "active"

# conversation.update! status: 1
conversation.archived!
conversation.archived? # => true
conversation.status    # => "archived"

# conversation.update! status: 1
conversation.status = "archived"

# conversation.update! status: nil
conversation.status = nil
conversation.status.nil? # => true
conversation.status      # => nil

So what’s going on in terms of Ruby meta-programming?

For all the enum values declared for Conversation, methods are created in the following forms. Let's use the model Conversation, the column “status”, and the enum value “active” for this example:

method: description

self.status: Returns the enum string value (not the symbol, and not the integer db value).
self.status = <string, symbol, or integer>: Sets the status to the corresponding enum integer value. A string or symbol is converted to its integer value; an invalid value raises an ArgumentError.
self.active!: Sets the status enum to “active”. This syntax is a bit confusing in that you don't see the attribute you're assigning! Raises ArgumentError for an invalid enum value.
self.active?: Equivalent to (self.status == "active"), and *not* equivalent to (self.status == :active), since symbols are not equal to strings!
Conversation.active: Equivalent to Conversation.where(status: "active"). Again, it's a bit confusing not to see the column being queried.
Conversation.statuses: The mapping of names to ordinal values, { "active" => 0, "archived" => 1 }, of type HashWithIndifferentAccess, meaning you can use symbols or strings as keys.
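To make this concrete, here's a minimal plain-Ruby sketch of how these generated methods behave. This is illustrative only; it is not the actual Rails implementation (which is metaprogrammed, as shown later in the Deep Dive section):

```ruby
# Hand-written sketch of the methods that `enum status: [ :active, :archived ]`
# generates on Conversation; an illustration, not the real Rails code.
class Conversation
  STATUSES = { "active" => 0, "archived" => 1 }.freeze

  def status # reader: translates the integer db value back to a string
    STATUSES.key(@status_value)
  end

  def status=(value) # writer: accepts a string, symbol, integer, or nil
    if value.nil?
      @status_value = nil
    elsif STATUSES.key?(value.to_s)
      @status_value = STATUSES[value.to_s]
    elsif STATUSES.value?(value)
      @status_value = value
    else
      raise ArgumentError, "'#{value}' is not a valid status"
    end
  end

  def active!
    self.status = "active"
  end

  def active?
    status == "active"
  end
end

c = Conversation.new
c.status = :archived
puts c.status  # "archived" -- the symbol was stored as the integer 1
c.active!
puts c.active? # true
```

With this sketch, setting the status to `:archived` stores the integer 1, and the reader translates it back to the string "archived", mirroring the behavior described above.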

Default Values for Enums

As the docs say, it’s a good idea to use the default value from the database declaration, like:

create_table :conversations do |t|
  t.column :status, :integer, default: 0, null: false
end

More specifically, consider making the first declared status (enum db value zero) the default, and do not allow null values. I've found that when I've allowed null values in enums, it makes all my code more complicated. This is related to the Null Object Pattern: nulls in your data, and checking for them in your code, will make your life more difficult! Instead, have an enum value for “I don't know” if that really is a possibility, make it the first value (index zero), and set that as the database column default.

Queries on Enums

The docs say:

In rare circumstances you might need to access the mapping directly. The mappings are exposed through a class method with the pluralized attribute name

Conversation.statuses # => { "active" => 0, "archived" => 1 }

This is not rare! This is critical!

For example, suppose you want to query where the status is not “archived”:

You might be tempted to think that Rails will be smart enough to figure out that this works:

Conversation.where("status <> ?", "archived")

But Rails is not smart enough to know that the ? refers to status, an enum column. So you have to use this syntax:

Conversation.where("status <> ?", Conversation.statuses[:archived])

You might be tempted to think that this would work:

Conversation.where.not(status: :archived)

That throws an ArgumentError. Rails wants an integer and not a symbol, and symbol does not define to_i.

What’s worse is this one:

Conversation.where.not(status: "archived")

The problem is that ActiveRecord sees that the enum column is of type integer and calls #to_i on the value, so "archived".to_i gets converted to zero. In fact, all your enum strings will get converted to zero! And if you read the enum attribute from an ActiveRecord instance (say a Conversation object), you get a string value!
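You can verify how permissive String#to_i is in any Ruby console:

```ruby
# String#to_i never raises; a string with no leading digits becomes 0.
puts "archived".to_i   # 0
puts "active".to_i     # 0
puts "123abc".to_i     # 123 (leading digits are used, the rest is ignored)

# So every enum name passed as a raw string collapses to the first enum value:
p ["active", "archived"].map(&:to_i)  # [0, 0]
```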

If you’re curious what the Rails source is, then take a look here: ActiveRecord::Type::Integer.

Here’s a guaranteed broken bit of code:

# my_conversation.status is a String!
Conversation.where.not(status: my_conversation.status)

You’d think that Rails would be clever enough to see that the key maps to an enum and then check if the comparison value is a String, and then it would not call to_i on the String! Instead, we are effectively running this code:

Conversation.where.not(status: 0)

An acceptable alternative to the last code example would be:

Conversation.where.not(status: Conversation.statuses[my_conversation.status])

If you left out the not, you could also do:

Conversation.send(my_conversation.status)

However, I really would like to simply do these, all of which DO NOT work:

Conversation.where(status: my_conversation.status)
Conversation.where(status: :archived)
Conversation.where(status: "archived")

Pluck vs Map with Enums

Here’s another subtle issue with enums.

Should these two lines of code give the same result or different results?

statuses_with_map = Conversation.select(:status).where.not(status: nil).distinct.map(&:status)
statuses_with_pluck = Conversation.distinct.where.not(status: nil).pluck(:status)

It’s worth experimenting with this in the Pry console!

In the first case, with map, you get back an Array with 2 strings: ["active", "archived"]. In the second case, with pluck, you get back an Array with 2 integers: [0, 1].

What’s going on here?

In the code where map calls the status method on each Conversation record, the status method converts the database integer value into the corresponding String value!

In the code that uses pluck, you get back the raw database value. It's arguable whether Rails should intelligently transform this value into the string equivalent, since that is what happens in other uses of ActiveRecord. Changing this would be problematic, though, as there could be code that depends on getting back the numerical value.
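The difference can be sketched in plain Ruby, using a hypothetical statuses mapping like the one Conversation.statuses returns:

```ruby
# pluck returns the raw column values; map(&:status) runs each value
# through the enum reader, which translates integers back to names.
statuses = { "active" => 0, "archived" => 1 }    # stands in for Conversation.statuses

raw_values = [0, 1]                               # what pluck(:status) returns
translated = raw_values.map { |v| statuses.key(v) } # what map(&:status) returns

p raw_values  # [0, 1]
p translated  # ["active", "archived"]
```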

find_or_initialize_by, oh my!!!

Let’s suppose we have this persisted in the database:

Conversation {
  :id => 18,
  :user => 25,
  :status => "archived" (1 in database)
}

And then we do a find_or_initialize_by:

[47] (pry) main: 0> conversation = Conversation.find_or_initialize_by(user: 25, status: "archived")
  Conversation Load (4.6ms)  SELECT  "conversations".* FROM "conversations"
    WHERE "conversations"."user_id" = 25
       AND "conversations"."status" = 0 LIMIT 1
#<Conversation:> {
         :id => nil,
    :user_id => 25,
     :status => "archived"
}

We got nil for :id, meaning that we’re creating a new record. Wouldn’t you expect to find the existing record? Well, maybe not given the way that ActiveRecord.where works, per the above discussion.

Next, the status on the new record is set to “archived”, which is value 1. Hmmm… if you look closely above, the query uses

AND "conversations"."status" = 0

Let’s look at another example:

Conversation {
  :id => 19,
  :user => 26,
  :status => "active" (0 in database)
}

And then we do a find_or_initialize_by:

[47] (pry) main: 0> conversation = Conversation.find_or_initialize_by(user: 26, status: "active")
  Conversation Load (4.6ms)  SELECT  "conversations".* FROM "conversations"
    WHERE "conversations"."user_id" = 26
      AND "conversations"."status" = 0 LIMIT 1
#<Conversation:> {
         :id => 19,
    :user_id => 26,
     :status => "active"
}

Wow! Is this a source of subtle bugs and some serious yak shaving?

Note, the above applies equally to ActiveRecord.find_or_create_by.

It turns out that the Rails methods that allow creation of a record via a Hash of attributes will convert the enum strings to the proper integer values, but this is not the case when querying!

Rails Default Accessors For Setting Attributes

You may find it useful to know which Rails methods call the “Default Accessor” versus just going to the database directly. That makes all the difference in terms of whether or not you can/should use the string values for enums.

The key thing is that “Uses Default Accessor” means that string enums get converted to the correct database integer values.

Method Uses Default Accessor (converts string enums to integers!)
attribute= Yes
write_attribute No
update_attribute Yes
attributes= Yes
update Yes
update_column No
update_columns No
Conversation::update Yes
Conversation::update_all No

For more information on this topic, see

  1. Different Ways to Set Attributes in ActiveRecord by @DavidVerhasselt.
  2. Official API of ActiveRecord::Base
  3. Official Readme of Active Record – Object-relational mapping put on rails.

While these don’t mention Rails enums, it’s critical to understand that enums create default accessors that do the mapping to and from Strings.

So when you call these methods, the default accessors are used:

conversation.status = "archived"
conversation.status = 1
puts conversation.status # prints "archived"

So keep in mind when those default accessors are used per the above table.

Deep Dive: Enum Source

If you look at the Rails source code for ActiveRecord::Enum, you can see this at line 91, for the setter of the enum (I added some comments):

_enum_methods_module.module_eval do
  # def status=(value) self[:status] = statuses[value] end
  define_method("#{name}=") { |value|
    if enum_values.has_key?(value) || value.blank?
      # set the db value to the integer value for the enum
      self[name] = enum_values[value]
    elsif enum_values.has_value?(value) # values contains the integer
      self[name] = value
    else
      # enum_values did not have the key or value passed
      raise ArgumentError, "'#{value}' is not a valid #{name}"
    end
  }
end

From this definition, you see that both of these work:

conversation.status = "active"
conversation.status = 0

Here’s the definition for the getter, which I’ve edited a bit for illustrative purposes:

# def status() statuses.key self[:status] end
define_method(name) do
  db_value = self[name] # such as 0 or 1
  enum_values.key(db_value) # the key value, like "archived" for db_value 1
end
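The reverse lookup here is just Hash#key, which you can verify in a console:

```ruby
# Hash#key returns the first key whose value matches the argument,
# which is exactly how the enum reader maps a db integer back to a name.
statuses = { "active" => 0, "archived" => 1 }

puts statuses.key(1)  # archived
p statuses.key(nil)   # nil -- a NULL column yields a nil status
p statuses.key(42)    # nil -- an unmapped integer also yields nil
```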

Recommendations to the Rails Core Team

In response to this issue, I submitted this github issue: Rails where query should see value is an enum and convert a string #17226

  1. @Bounga and @rafaelfranca on Github suggest that we can’t automatically convert enum string values in queries. I think that is true for converting cases of a ? or a named param, but I suspect that a quick map lookup to see that the attribute is an enum, and a string is passed, and then converting the string value to an integer is the right thing to do for 2 reasons:
    1. This is the sort of “magic” that I expect from Rails.
    2. Existing methods find_or_initialize_by and find_or_create_by will result in obscure bugs when string params are passed for enums.

    However, it's worth considering whether all default accessor methods (setters) should consistently be called when passing values in a hash to such methods. I would venture that Rails enums are some Rails-provided magic, and thus they should have a special case. If this shouldn't go into Rails, then possibly a gem extension could provide a method like Model.where_with_enum which would convert a String into the proper numerical value for the enum. I'm not a huge fan of the generated Model scopes for enums, as I like to see what database field is being queried against.

  2. Aside from the automatic conversion of enum hash attributes, I recommend we change the automatic conversion of Strings to integers to use the stricter Integer(some_string) rather than some_string.to_i. The difference is considerable: String#to_i is extremely permissive. Try it in a console. With the to_i method, any numeric characters at the beginning of the String are converted to an Integer. If the first character is not a number, 0 is returned, which is almost certainly a default enum value. Thus, this simple change would make it extremely clear when an enum string is improperly used. I would guess that this would make some existing code crash, but in all circumstances for a valid reason. Whether this change should be done for all integer attributes is a different discussion, as that could have backwards-compatibility ramifications. This change would require changing the tests in ActiveRecord::ConnectionAdapters::TypesTest. For example, this test:
    assert_equal 0, type.type_cast_from_user('bad')
    

    would change to throw an exception, unless the cases are restricted to using Integer() for enums. It is inconsistent that some type conversions throw exceptions, such as converting a symbol to an integer, while others do not. Whether or not they should is a much larger issue. In the case of enums, I definitely believe that an improper enum string value should not silently convert to zero every time.
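The contrast between the two conversions is easy to demonstrate in plain Ruby:

```ruby
# Integer() is the strict counterpart to String#to_i.
puts "bad".to_i    # 0 -- silent, and 0 is almost certainly a valid enum value
puts Integer("42") # 42 -- well-formed numeric strings still convert

begin
  Integer("bad")   # raises instead of silently returning 0
rescue ArgumentError => e
  puts "rejected: #{e.message}"
end
```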

Conclusion

I hope this article has convinced you that it’s worth understanding Ruby as much as it is to understand Rails. Additionally, the new Enum feature in 4.1 requires some careful attention!

Thanks to Hack Hands for supporting the development of this content. You can find a copy of this article in their blog.

Adding a JS Library to a Ruby on Rails Project When Using Webpack

What’s it like to add a JavaScript library when Webpack is integrated into your Ruby on Rails environment, per my article: Fast Rich Client Rails Development With Webpack and the ES6 Transpiler?

It’s super easy! But what if you want some of the legacy JavaScript or CoffeeScript code in your Rails app to access the Webpack-added library?

Here’s a real-world example. Suppose you want your JavaScript code to round numbers to a given number of decimal places. Math.round() only rounds decimal numbers to the nearest integer. A code sample on that page shows how to round numbers to some number of decimal places.

A quick Google for JavaScript libraries finds the npm package compute-roundn. A look at the github repository for compute-io/roundn reveals clean code and some tests.

So should you copy some cribbed JavaScript code example or maybe copy-paste the source of some code into your /vendor/assets/javascripts directory? What’s the disadvantage of doing this?

  1. It becomes your problem to maintain this code. Imagine if you had to maintain all the code behind the Ruby Gems in your Rails project?
  2. If you copy the code, are you going to create some tests?
  3. What if this code depends on other JavaScript libraries? Or what if you later want a library that depends on this library?

There is a better way, by using npm packages. And yes, there are alternatives for Rails using Bower, but many more packages are available via npm than Bower. There is also the browserify-rails gem, and the steps below mostly apply to this gem.

Assuming that you’ve got your Rails codebase set up per my article on Webpack in Rails, as shown in this sample Github repo: justin808/react-webpack-rails-tutorial, you’ll need to follow these steps.

  1. Google for the npm package that you wish to use. I typically Google “npm <some keywords>”. Then take a look at the code and see how popular it is. You’ll want to examine the code of less popular node packages more carefully, as with less popularity, there’s a greater likelihood of unreported and unfixed bugs.
  2. In my case, I found the package for “compute-roundn”, so I ran this command:
    npm install compute-roundn --save
    
    That adds this entry to your /package.json
    { "dependencies": {
         "compute-roundn": "^1.0.0",
    
  3. Run the command to create /npm-shrinkwrap.json
    npm shrinkwrap
    
    It’s critical that you don’t forget to update this file, because if you forget, your Heroku build will fail, as the Node buildpack will not install your newly added package!
  4. If you needed this code for your module-based Webpack code, then you just need to add the require line at the top of the relevant JavaScript file, like this (as shown in the npm readme for roundn):
    var roundn = require( 'compute-roundn' );
    
    Yippee. That’s it!
  5. If you need this library for your existing Rails JavaScript or CoffeeScript code, then you’ll need to globally export the library. Assuming that the module-based code does not need this library as well, you’ll want to edit your file called /webpack/scripts/rails_only.jsx and add this line:
    window.roundn = require("compute-roundn");
    
    That file gets loaded by webpack.rails.config.js and not by running the Webpack Dev server.
  6. There is an alternative approach of modifying the webpack config file, if you were also referencing this library from some other Webpack-bundled code. Thus, you would change the webpack.rails.config.js file with this line:
    
    exports.module.loaders = [{ test: require.resolve("compute-roundn"), loader: "expose?roundn" }];
    
    
    Note, that doesn’t work unless you have other code loaded by Webpack that “requires” this package.
  7. When you deploy to Heroku, you see this:
    -----> Installing dependencies
           compute-roundn@1.0.0 node_modules/compute-roundn
    

    If you have problems with your Heroku deploy failing to install dependencies, check out this article in my forum: Notes on Deploying to Heroku with GSL and Node.

You may be wondering, “Why not just use the browserify-rails gem, which is slightly simpler in terms of setup?” A good reason would be that you want to use JSX and ES6 transpilers with your JavaScript code. That was my reason.

That’s it! I hope you agree this is way better than copy-pasting dependencies.

Fast Rich Client Rails Development With Webpack and the ES6 Transpiler

There has to be a better way to incorporate the JavaScript ecosystem into Rails.

Have you:

  1. Wondered if there’s a better way to utilize modern JavaScript client frameworks in the context of an existing Ruby on Rails project?
  2. Gotten confused about how to integrate JavaScript libraries and examples that are packaged up into proper “modules”?
  3. Discovered the drawbacks of having all your application’s JavaScript littering the global namespace?
  4. Heard about ES6 (aka Harmony), the next version of JavaScript and how the cool kids in Silicon Valley (Facebook, Instagram, Square, etc.) are using ES6 syntax?

How would you like to achieve, within a Rails project:

  1. The ability to prototype a rich UI, seeing changes in JS and CSS/Sass code almost instantly after hitting save, without the page reloading.
  2. First class citizenship for utilizing the Node ecosystem, by specifying dependencies in package.json, running npm install, and then simply requiring modules in JavaScript files.
  3. Seamless integration of Node based JavaScript assets for the Rails Asset Pipeline, thus not circumventing the asset pipeline, but co-existing with it and leveraging it.
  4. The ability to plug the node client side ecosystem into an existing Rails project seamlessly.
  5. Utilization of many JavaScript tools, such as the React JSX transpiler and the ES6 transpiler.

This article will show you how you can utilize Webpack in your Rails development process to achieve these goals!

First, I’m going to tell you a brief story of how I came to the realization that there had to be a better way to incorporate the JavaScript ecosystem into Rails.

What’s Wrong with Copying and Pasting Tons of JavaScript into /vendor/assets/javascripts?

Imagine doing Ruby on Rails projects without Bundler? Oh, the horror! Well, that’s what copying tidbits of JavaScript into /vendor/assets/javascripts is like! It’s actually a bit worse than that, as many of these JavaScript libraries depend on either AMD (aka require.js) or CommonJS module syntax being available. (For a great explanation of how these module systems work, see Writing Modular JavaScript With AMD, CommonJS & ES Harmony.) This would be much more of a problem in the Rails community were it not for the fact that many popular JavaScript libraries are packaged into gems, such as the jquery-rails gem. You might think that works fine, until you start to encounter JavaScript modules that lack gems. For example, you may want to start leveraging the many npm-packaged React components, such as react-bootstrap, or you may want to leverage the JavaScript toolchain, such as the JSX and ES6 transpilers (es6-transpiler and es6-module-transpiler).

Thankfully, this experience has broken me away from the JavaScript baby bottle of gemified JavaScript! You can now become a 1st class JavaScript citizen!

Motivation: React and ES6

My foray down the Node rabbit hole began with a desire to use the React framework, including its JSX transpiler. In a nutshell, the React library stands out as unique, innovative, and impressive. You can simply think about the client-side UI as a set of components that are recursively composed and which render based on a set of data that flows in one direction, from the top-level component to each of its children. For further details on the benefits of React, see my article React on Rails Tutorial. For purposes of this article, you can imagine substituting my example of using React with your favorite rich client JavaScript framework. I’d be thrilled if somebody would fork my project and create a version using EmberJs.

At first this mission of integrating React seemed easy, as there is a Ruby gem, the react-rails gem, that provides a relatively painless mechanism for integrating React into a Rails project. This is definitely the simplest method. I’ve created a tutorial, React on Rails Tutorial, with a companion github repository, justin808/react-rails-tutorial, that walks you through using the react-rails gem with the Rails 4.2 scaffold generator. Then I wanted to plug in the react-bootstrap library. With no gem available, I considered manually copy-pasting the source to my /vendor/assets/javascripts directory, but that just seemed to smell for the following reasons:

  1. JavaScript has a mature system for managing dependencies (packages & modules): npm (and bower).
  2. Dependencies often depend on other dependencies, in both the Ruby and JavaScript worlds. Imagine managing Ruby dependencies by hand.
  3. JavaScript modules often depend on either CommonJs or RequireJs being available.

(Side note: in terms of Node, a module is a special case of package that JavaScript code can require(). For more info, see the npm faq and Stack Overflow).

Here’s a good summary of other ways to handle the assets in a Rails app: Five Ways to Manage Front-End Assets in Rails. I briefly tried those techniques, plus the browserify-rails gem. However, they seemed to conflict with the react-rails gem, and if I didn’t use that gem, I’d need a way to convert the jsx into js files. This led me to try the webpack module bundler.

Webpack

What’s Webpack?

webpack takes modules with dependencies and generates static assets representing those modules.

Why did I try Webpack? It was recommended to me by Pete Hunt of the React team. Here are some solid reasons for “why Webpack”:

  1. Leverages npm (and optionally bower) for package management.
  2. Supports whatever module syntax you prefer.
  3. Has loaders (think pipeline), including ES6 and JSX.
  4. Its Webpack Dev Server rocks for quick prototypes (Hot Module Replacement) of JS and CSS/Sass code.

A good place to get started with Webpack is Pete Hunt’s webpack-howto.

I initially tried the webpack module bundler separately from Rails, as I wanted to see the “hot reloading” of React code in action. You can try this sample code: react-tutorial-hot. Hot Module Replacement changes the JS code (and possibly the CSS) of the running code without any page refresh. Thus any data in the JS objects sticks around! This is way cooler than Live Reload, which refreshes the whole browser page.

Then I started using these features of Webpack:

  1. es6-loader, which incorporates both the es6-transpiler and the es6-module-transpiler. For fun, try out the ES6 syntax with the ES6 Fiddle. Here’s a great reference on ES6 features.
  2. jsx-loader, which handles jsx files using es6.
  3. Trivial integration of any additional packages available via npm and the ability to use whichever module syntax is most convenient.

As Webpack generates a “bundle” that is not necessarily minified, it would seem that this could be incorporated into the Rails asset pipeline, and sure enough, it can be! This is well described in this article: Setting Up Webpack with Rails along with this example code to precompile with Webpack: Webpack In The Rails Asset Pipeline.

With the basic parts in place, I wanted to achieve the following:

  1. Be able to prototype client side JS using Webpack Dev Server (with hot module replacement), while having this same code readily available in my Rails app. This involves having JavaScript, Sass, and Image files commonly available to both Rails and the Webpack Dev Server.
  2. Be able to easily deploy to Heroku.

My solution to the problem is shown in this github repo: justin808/react-webpack-rails-tutorial. This is based on my tutorial using the react-rails gem: Rails 4.2, React, completed tutorial. I will now describe this solution in detail.

Setup

You’ll need to install Node.js. I’m assuming you already have Ruby and Rails installed.

  1. Node.js: You can find the Node.js download file here. Note, some friends of mine recommended the Node.js installer rather than using Brew. I did not try Brew.
  2. Many articles recommend changing the ownership of your /usr/local directory to yourself, so that you don’t need to run node commands with sudo. To do so, run the following command:
    sudo chown -R $USER /usr/local
    
  3. Your /package.json file describes all other dependencies, and running npm install will install everything required.

Once I got this working, it felt like Santa Claus came to my app with the whole Node ecosystem!

Bundler and Node Package Manager

All Rails developers are familiar with gems and Bundler (bundle). The JavaScript equivalent is a package.json file with the Node Package Manager (npm) (see the discussion in the next section on why not Bower).

Both of these package-manager systems take care of retrieving dependencies from reputable online sources. Using a package.json file is far superior to manually downloading dependencies and copying them into the /vendor/assets/ directory!

Why NPM and not Bower for JS Assets?

The most popular equivalents for JavaScript are the Node Package Manager (npm) and Bower. For use with webpack, you’ll want to prefer npm, per the reasons in the documentation:

In many cases modules from npm are better than the same module from bower. Bower mostly contain only concatenated/bundled files which are:

  • More difficult to handle for webpack
  • More difficult to optimize for webpack
  • Sometimes only usable without a module system

So prefer to use the CommonJs-style module and let webpack build it.

Webpack Plus Rails Solution Description

To integrate webpack with Rails, webpack is used in 2 ways:

  1. Webpack is used solely within the /webpack directory in conjunction with the Webpack Dev Server to provide a rapid tool for prototyping the client-side JavaScript. The file webpack.hot.config.js sets up the JS and CSS assets for the Webpack Dev Server.
  2. Webpack watches for changes and generates the rails-bundle.js file that bundles all the JavaScript referenced in the /webpack/assets/javascripts directory. The file webpack.rails.config.js converts the JSX files into JS files through the JSX and ES6 transpilers.

The following image describes the organization of integrating Webpack with Rails.

File Notes and Description
/app/assets/javascripts/rails-bundle.js Output of webpack --config webpack.rails.config.js
/app/assets/javascripts/application.js Add rails-bundle so the webpack output is included in sprockets
/app/assets/javascripts Do not include any files used by Webpack. Place those files in /webpack/assets/javascripts
/app/assets/stylesheets/application.css.scss Reference sass files in /webpack/assets/stylesheets
/node_modules Where npm puts the loaded packages
/webpack All webpack files under this directory except for node_modules and package.json
/webpack/assets/images Symlink to /app/assets/images. Needed so that Webpack Dev Server can see same images referenced by Rails sprockets
/webpack/assets/javascripts javascripts are packaged into rails-bundle.js as well as used by the Webpack Dev Server
/webpack/assets/stylesheets stylesheets are used by the asset pipeline (referenced directly by /app/assets/stylesheets/application.css.scss) as well as used by the Webpack Dev Server
/webpack/index.html the default page loaded when testing the Webpack Dev Server
/webpack/scripts files used by only the Rails or Webpack Dev Server environments
/webpack/server.js server.js is the code to configure the Webpack Dev Server
/webpack/webpack.hot.config.js configures the webpack build for the Webpack Dev Server
/webpack/webpack.rails.config.js configures web pack to generate the rails-bundle.js file
/.buildpacks used to configure multiple node + ruby buildpacks for Heroku
/npm-shrinkwrap.json and /package.json define the packages loaded by running ‘npm install’

webpack.config

To reiterate, we needed Webpack for the following reasons:

  1. To enable the use of JS “modules”, using either the AMD (aka require.js) or CommonJS module syntax.
  2. To convert JSX files (ES6 and JSX syntax) into JS files. Note, you probably don’t want to blindly convert all JS files into ES6, as that may conflict with some imported modules.

This is set up with the webpack.config file. We need two versions of this file for the two different uses: the Webpack Dev Server and the asset pipeline.

Changing the webpack.config

You may be wondering whether you’ll need to edit these webpack config files. Here are some things you’ll need to pay attention to.

  1. module.exports.entry: The entry points determine what webpack places in the bundle. While this may seem similar to the manifest file /app/assets/javascripts/application.js, it’s very different in that you only need to specify the entry points. So if you specify ./assets/javascripts/example (you don’t need the file suffix) as the entry point, then you do not and should not specify ./assets/javascripts/CommentBox as an entry point. Once again, dependencies are calculated by Webpack, unlike with Rails.
    module.exports = {
     context: __dirname,
     entry: [
       "./assets/javascripts/example"
     ],
    
  2. module.exports.externals: If you want to load jQuery from a CDN or from the Rails gem, you might specify:
    module.exports.externals = {
      jquery: "var jQuery"
    };
    
  3. module.exports.module.loaders: This is the place where you can expose jQuery from your Webpack rails-bundle.js so that the non-module-using parts of Rails can still use jQuery.
    module.exports.module = {
      loaders: [
        // Next 2 lines expose jQuery and $ to any JavaScript files loaded after rails-bundle.js
        //   in the Rails Asset Pipeline. Thus, load this one prior.
        { test: require.resolve("jquery"), loader: "expose?jQuery" },
        { test: require.resolve("jquery"), loader: "expose?$" }
      ]
    }
    

That being said, it’s well worth familiarizing yourself with the documentation for webpack. The gitter room for webpack is also helpful.

Webpack Dev Server and Hot Module Replacement

While waiting for webpack to create the rails-bundle.js file and then reloading the Rails page is not terribly time consuming, there’s no comparison to using the Webpack Dev Server with Hot Module Replacement, which loads new JavaScript and Sass code without modifying the existing client-side data when possible. If you thought Live Reload was cool, you’ll love this feature. To quote the documentation:

The webpack-dev-server is a little node.js express server, which uses the webpack-dev-middleware to serve a webpack bundle. It also has a little runtime which is connected to the server via socket.io. The server emit information about the compilation state to the client, which reacts on that events.

It serves static assets from the current directory. If the file isn’t found a empty HTML page is generated whichs references the corresponding javascript file.

In a nutshell, the file /webpack/server.js is the HTTP server utilizing the Webpack Dev Server API. It:

  1. Configures the webpack assets via /webpack/webpack.hot.config.js.
  2. Has a couple of JSON responses.
  3. Configures “hot” to be true to enable hot module replacement.
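Put together, a server.js along these lines would cover those three points. This is a hedged sketch using the webpack 1.x-era Webpack Dev Server API; the actual options, JSON routes, and the `server.app` Express handle are my assumptions, and the repo’s file may differ:

```javascript
var webpack = require("webpack");
var WebpackDevServer = require("webpack-dev-server");
var config = require("./webpack.hot.config");

var server = new WebpackDevServer(webpack(config), {
  hot: true,              // enables Hot Module Replacement
  contentBase: __dirname  // serves /webpack/index.html and static assets
});

// Example JSON response standing in for the Rails backend during development:
server.app.get("/comments.json", function(req, res) {
  res.json([{ author: "Justin", text: "My first comment." }]);
});

server.listen(3000);
```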

JavaScripts

Webpack handles the following aspects of the /webpack/assets/javascripts directory:

  1. Preparing a “bundle” of the JavaScript files needed by either Rails or the Webpack Dev Server. This includes running the files through the jsx and es6 loaders, which transpile the JSX and ES6 syntax into standard JavaScript. Here’s the configuration that does the loading:
    module.loaders = [{ test: /\.jsx$/, loaders: ["react-hot", "es6", "jsx?harmony"] }]
    
  2. Webpack also normalizes whichever module loading syntax you choose (RequireJS, CommonJS, or ES6).
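Whichever syntax you write, webpack resolves repeated requests for the same module to one cached instance, just as Node does. Here’s a toy illustration of that caching behavior (this is not webpack code, just a stand-in to show the idea):

```javascript
// A fake module registry: the factory runs once, later requires hit the cache.
var cache = {};
function fakeRequire(name, factory) {
  if (!cache[name]) cache[name] = factory();
  return cache[name];
}

var a = fakeRequire("CommentBox", function() { return { renders: 0 }; });
var b = fakeRequire("CommentBox", function() { return { renders: 0 }; });
console.log(a === b); // true: one module instance, however it was required
```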

Sass and images

For the Webpack Dev Server build (not the Rails build that creates rails-bundle.js), Sass is loaded via webpack for 2 reasons:

  1. Webpack takes care of running the sass compiler.
  2. Any changes made to sass or css files are loaded by the hot module loader into the browser.

The file /webpack/scripts/webpack_only.jsx contains this:

require("test-stylesheet.css");
require("test-sass-stylesheet.scss");

This “requires” stylesheet information just like a “require” of JavaScript. Thus, /webpack/index.html does not reference any output from the Sass generation. This file, webpack_only.jsx is referenced only in the webpack.hot.config.js file as an “entry point”, which means that it gets loaded explicitly in the created bundle file.

Images were a bit tricky, as during deployment you want your images fingerprinted for caching purposes. This is nearly invisible to users of newer versions of Rails, thanks to the fingerprinting feature of the Rails asset pipeline. While webpack can also fingerprint images, that’s not needed, as we’re not depending on webpack for our Rails deployments. So we just need the Webpack Dev Server to access the same image files. I.e., we need a syntax in the scss files for referencing images that works for both the Webpack Dev Server and the Rails asset pipeline.

For example, here’s a snippet of sass code to load the twitter_64.png image from the top level of the /app/assets/images directory. This needs to work for both the Asset Pipeline as well as the Webpack Dev Server.

.twitter-image {
  background-image: image-url('twitter_64.png');
}

The problem of how to get the same images into the stylesheets of both Rails and Express server versions was solved by using a symlink, which git will conveniently store.

  1. /webpack/assets/images is a symlink for the /app/assets/images directory.
  2. The image-url sass helper takes care of mapping the correct directories for images. The image directory for the webpack server is configured by this line:
    module.loaders = [{ test: /\.scss$/, loader: "style!css!sass?outputStyle=expanded&imagePath=/assets/images" }]
    

    The sass gem for Rails handles the mapping for the Asset Pipeline.

  3. The symlink was necessary, as the Webpack Dev Server could not reference files above the root directory.

This way the images are fingerprinted correctly for production builds via the Rails asset pipeline, and the images work fine for the Webpack Dev Server.

Sourcemaps

When debugging JavaScript in the Rails app, I did not want to have to scroll through a giant rails-bundle.js of all the JS assets. Sourcemap support in Webpack addressed that issue. At first I tried to use plain sourcemaps (separate file rather than integrated), but that resulted in an off-by-one error. Furthermore, I had to do some fancy work to move the created file to the correct spot in /public/assets. Also note that building the sourcemap file when deploying to Heroku breaks the Heroku build. Both of these cases are handled at the bottom of the file webpack.rails.config.js.

This is what sourcemaps look like in Chrome.
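For reference, the sourcemap switch in webpack 1.x is the devtool option. The following is a hedged sketch of the kind of logic described above, not the repo’s actual code; in particular, detecting Heroku via the DYNO environment variable is my assumption:

```javascript
// In webpack.rails.config.js:
module.exports.devtool = "source-map"; // emits rails-bundle.js.map as a separate file

// Skip sourcemap generation on Heroku, where building the file breaks the build.
// (DYNO is an env var Heroku sets on its dynos; the repo may detect Heroku differently.)
if (process.env.DYNO) {
  delete module.exports.devtool;
}
```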

Heroku Deployment

There are several things needed to get builds working on Heroku.

  1. It’s critical that package.json has all tools required by the Heroku build in dependencies and not devDependencies, as Heroku only installs the modules listed in dependencies. You should use devDependencies for tools that only your local Webpack Dev Server uses.
  2. Clean up your build cache:
    heroku plugins:install https://github.com/heroku/heroku-repo.git
    heroku repo:purge_cache -a <my-app>
    
  3. Be sure to run npm-shrinkwrap after ANY changes to dependencies inside of package.json.
  4. I needed to configure the compile_environment task to create the rails-bundle.js via Webpack using the file /lib/tasks/assets.rake.
  5. Heroku needs both the node and ruby environments. In order to deploy to Heroku, you’ll need to run this command once to set a custom buildpack:
heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git

This runs the two buildpacks in the /.buildpacks file courtesy of the ddollar/heroku-buildpack-multi buildpack.

Why Are node_modules and package.json Not in the webpack Directory?

While it would be tidier to put node_modules and package.json into the /webpack directory, the problem is that this would require a custom buildpack for installing the node_modules on Heroku.

Why Have a Second Assets Directory Under Webpack?

At first, I had Webpack reference the JSX files from the /app/assets/javascripts directory. However, I wanted to be able to use a WebStorm project based on just the JavaScript code. I’d either have to put the WebStorm project at the root level, thus including all the Ruby directories, or I could use a symlink to the javascripts directory. You NEVER want to run two different JetBrains products simultaneously on the same directory, so that ruled out using WebStorm at the top of my Rails app. The symlink approach seemed to work, but that got confusing, especially given I’d sometimes open the JSX files in Emacs.

The approach of putting the webpack bundled assets under the /webpack/assets directory worked out well for me. It seems natural that Webpack bundles those assets and puts them into the rails-bundle.js file in the /app/assets/javascripts directory.

For the same reasons, I’m keeping stylesheets referenced by Webpack under the /webpack directory. Note, I’m using Webpack to load stylesheets, as that allows stylesheet changes to be hot loaded into the browser! If you edit any of the files in the /webpack/assets/stylesheets directory, you’ll see the browser update with the style changes almost immediately after you hit save. The standard Rails file /app/assets/stylesheets/application.css.scss references the stylesheets in /webpack/assets/stylesheets.

How to Add an NPM (JavaScript) Module Dependency

This is a bit like modifying your Gemfile with a new gem dependency.

  1. Modify your /package.json file with the appropriate line for the desired package inside the “dependencies” section. You’ll want to specify an exact version, as that’s the recommendation in the Node community. Just google “npm <whatever module>” and you’ll get a link to the npm page for that module where you can see the version. For example, to add marked as a dependency, I added this line to package.json.
    "marked": "^0.3.2",
    
  2. Include the appropriate line to require the module. For example, to include the marked library:
    var marked = require("marked");
    

How to Update Node Dependencies

When you’re ready to take the time to ensure that upgrading your packages will not break your code, you’ll want to take the following steps. Refer to npm-check-updates and npm-shrinkwrap.

cd <top level of your app>
rm -rf node_modules
npm install -g npm-check-updates
npm-check-updates -u
npm install
npm-shrinkwrap

Rapid Client Development

Congratulations! You’ve gotten through what I believe is the secret sauce for rapid client-side JavaScript development. Once you complete the setup per the above steps, the flow goes like this:

  1. Run the Webpack Dev Server on port 3000
    cd webpack && node server.js
    
  2. Point your browser at http://0.0.0.0:3000.
  3. Start another shell and run
    foreman start -f Procfile.dev
    
  4. Point your browser at http://0.0.0.0:4000 and verify you can see the usage of the rails-bundle.js file.
  5. Update the jsx and scss files under /webpack/assets and see the browser at port 3000 update when files are saved.
  6. Start with static data in the JSX creation, and then move to having the server.js file vend JSON to the client.
  7. Once that works, have the rails server create the JSON.
  8. Deploy to Heroku!
  9. Prosper!

Acknowledgments

This work was inspired by a project for my client, Madrone Inc. The founder clearly desired a UI that did not fit into the standard request/response HTML of Rails. If you want to work with me on this project, or other related projects, please email me.

I’d like to thank the following reviewers: Ed Roman, @ed_roman, Greg Lazarev, @gylaz, Geoff Evason, @gevason, Jose Luis Torres, @joseluis_torres, Mike Kazmier, @Kaztopia, John Lynch, @johnrlynch, Jonathan Soeder, @soederpop, and Ben Ward, @mauilabs.

Comments, suggestions, and corrections are appreciated! I hope to get a lively discussion on the use of Webpack and Rails going in my new discussion forum at http://forum.railsonmaui.com.

Thanks to Hack Hands for supporting the development of this content. You can find a copy of this article in their blog.

Updates

  1. 2014-09-22: Updated the Heroku Deployment section, including how to handle dependencies vs. devDependencies in package.json.

React on Rails Tutorial

In response to a recent client request for a richer browser side UI experience, I took a fresh look at all the recent advances in JavaScript rich client frameworks. The React library stood out as unique, innovative, and impressive.

The main reasons that I like React are:

  1. It’s a better abstraction than MVC!
  2. React keeps track of what needs to change in the DOM with its virtual DOM model.
  3. All the view rendering code can assume that nothing changes during the rendering process as components recursively call render(). This makes reasoning about the rendering code much simpler.
  4. The simpler conceptual model of always rendering the entire UI from a given state is akin to the server-side rendering of HTML pages that Rails programmers are familiar with.
  5. The documentation is very good, and it’s got significant traction.
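Point 3 and 4 above boil down to “UI = render(state)”. Here’s that model sketched in plain JavaScript, with no React involved (the comment data is made up for illustration):

```javascript
// A pure render function: the entire view is derived from state each time.
function render(state) {
  return "<ul>" + state.comments.map(function(c) {
    return "<li>" + c.author + ": " + c.text + "</li>";
  }).join("") + "</ul>";
}

// Each state change re-renders the whole description; React's virtual DOM
// then diffs the output so only minimal real-DOM mutations are applied.
var html = render({ comments: [{ author: "Justin", text: "My first comment." }] });
console.log(html); // <ul><li>Justin: My first comment.</li></ul>
```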

Given that React is just about the View part of the client UI, or more specifically, view components, it seems especially suitable for integration into the Rails ecosystem to help build better rich JavaScript UIs. The React website contains a simple tutorial utilizing Node for the backend. What if you want to use Rails instead?

The following instructions walk you through the steps to build the original simple tutorial with a Rails 4.2 backend utilizing the react-rails gem. With the Rails scaffold generator, very little Rails coding is required. You can try the end result of the completed tutorial on Heroku, and the code on Github.

Since the original React tutorial is excellent, I will not be rehashing any of its explanations of how React works. This tutorial focuses purely on converting that tutorial to use Rails.

Besides carefully studying the original tutorial, I recommend:

  1. Watch these two videos for an introduction to React’s virtual DOM model. a. This video explains the design philosophy of React and why MVC is not the right model for building UIs. b. This video compares ReactJS vs. Key Value Observation (EmberJS) and Dirty Checking (AngularJS).
  2. Play with the examples on the React overview page. Don’t just read the examples; you can play with the code right on that page!
  3. Read the docs, which I found fairly interesting.

Useful React Links

  1. Completed React-Rails tutorial Live on Heroku: Tutorial Live on Heroku.
  2. Rails 4.2, React, completed tutorial: Github repo for completed tutorial.
  3. React: A Javascript Library For Building User Interfaces: Main website for React.
  4. React Tutorial: The Node basis for this tutorial.
  5. reactjs/react-tutorial: Github repo for official Node based tutorial.

Tutorial Step by Step

Create a brand new Rails 4.2 Application

  1. Install Ruby 2.1.2 or whichever recent Ruby you prefer. I use rvm.
  2. Install Rails gem
    gem install rails --pre
    

    NOTE: There is a bug if your RubyGems version is newer than 2.2.2. This is detailed in this question on Stack Overflow.

  3. Create the Rails app
    rails new react-rails-tutorial
    
  4. cd react-rails-tutorial
  5. Create .ruby-version and .ruby-gemset per your preferences inside react-rails-tutorial directory.
  6. Run bundler
    bundle install
    
  7. Create new git repository
    git init .
    
  8. Add and commit all files:
    git add . && git commit -m "rails new react-rails-tutorial"
    

Create Base Rails App Scaffolding for Comment model

  1. Run generator. Be sure to use the exact names below to match the React tutorial.
    rails generate scaffold Comment author:string text:text
    
  2. Migrate the database
    rake db:migrate
    
  3. Commit
    git add . && git commit -m "Ran rails generate scaffold Comment author:string text:text and rake db:migrate"
    

Create Page for App

  1. Run the controller generator
    rails generate controller Pages index
    
  2. Fix your config/routes.rb to go to the home page, by changing
    get 'pages/index'
    

    to

    root 'pages#index'
    

Try Out the New Rails App

  1. Start the server
    rails server
    
  2. Open your browser to http://0.0.0.0:3000 and see your blank home page.
  3. Open your browser to http://0.0.0.0:3000/comments and see the comments display.
  4. Add a comment. Click around. Neat!

  5. Test out the JSON API, automatically created by Rails:
    curl 0.0.0.0:3000/comments.json
    

    and see

    [{"id":1,"author":"Justin","text":"My first comment.","url":"http://0.0.0.0:3000/comments/1.json"}]
    
  6. View your routes
    > rake routes
             Prefix Verb   URI Pattern                  Controller#Action
               root GET    /                            pages#index
           comments GET    /comments(.:format)          comments#index
                    POST   /comments(.:format)          comments#create
        new_comment GET    /comments/new(.:format)      comments#new
       edit_comment GET    /comments/:id/edit(.:format) comments#edit
            comment GET    /comments/:id(.:format)      comments#show
                    PATCH  /comments/:id(.:format)      comments#update
                    PUT    /comments/:id(.:format)      comments#update
                    DELETE /comments/:id(.:format)      comments#destroy
    
  7. If all that worked, then commit your changes
    git add . && git commit -m "Ran rails generate scaffold Comment author:string text:text and rake db:migrate"
    

React Tutorial Using Node

This is what we’ll be converting to Rails 4.2.

  1. Create a new branch, in case we want to test the same design with AngularJS or EmberJS:
    git checkout -b "react"
    
  2. Take a look at the React Tutorial and the github repo: reactjs/react-tutorial.
  3. Open up a new shell window. Pick a directory and then do
    git clone git@github.com:reactjs/react-tutorial.git
    
  4. cd to the react-tutorial directory and open up the source code.
  5. Optionally run the tutorial example per the instructions in the README.md

Adding React to Rails

  1. We’ll be using the reactjs/react-rails gem. Plus we’ll need to include the showdown markdown parser, using the showdown-rails gem. Add these lines to your Gemfile and run bundle
    gem 'react-rails', github: 'reactjs/react-rails', branch: 'master'
    gem 'showdown-rails'
    

    Note, I’m using the tip of react-rails. Depending on when you try this tutorial, you may not wish to use the tip, and don’t do that for a production application!

  2. Per the gem instructions, let’s add the js assets below the turbolinks reference in app/assets/javascripts/application.js
    //= require jquery
    //= require jquery_ujs
    //= require turbolinks
    //= require showdown
    //= require react
    //= require_tree .
    
  3. Once you verify that you can load 0.0.0.0:3000 in your browser, then commit the files to git:
    git commit -am "Added react-rails and showdown-rails gems"
    

Move Tutorial Parts to Rails Application

Now the fun starts. Let’s take the parts out of the node tutorial and put them into the Rails app.

  1. Copy the necessary line from react-tutorial/index.html to replace the contents of app/views/pages/index.html.erb. You’ll just have one line there:
    <div id="content"></div>
    
  2. Now, the meat of the tutorial, the JS code. Copy the entire contents of react-tutorial/scripts/example.js into app/assets/javascripts/comments.js.jsx (renamed from comments.js.coffee).
  3. Commit the added files, so we can see what we change from the original versions.
    git commit -am "index.html.erb and comments.js.jsx added"
    
  4. Start the rails server (rails s). Visit 0.0.0.0:3000. Nothing shows up!

Tweak the Tutorial

In the example, the call to load example.js comes after the declaration of the DOM element with id “content”. So let’s run the renderComponent after the DOM loads. Wrap the React.renderComponent call at the bottom of comments.js.jsx like this:

$(function() {
  React.renderComponent(
    <CommentBox url="comments.json" pollInterval={2000} />,
    document.getElementById('content')
  );
})

Let’s commit that diff: git commit -am "React component loads"

Then copy the css from react-tutorial/css/base.css over to app/assets/stylesheets/comments.css.scss

The styling is not quite right.

Add bootstrap-sass Gem

  1. Add the gems
    gem 'bootstrap-sass'
    gem 'autoprefixer-rails'
    
  2. Run bundle install
  3. Rename app/assets/stylesheets/application.css to application.css.scss and change it to the following:
    @import "bootstrap-sprockets";
    @import "bootstrap";
    
  4. Optionally, add this line to app/assets/javascripts/application.js
    //= require bootstrap-sprockets
    
  5. Restart the application. Notice that there is no padding to the left edge of the browser window. That’s an easy fix. Let’s put the content div inside a container, by changing app/views/pages/index.html.erb to this:
    <div class="container">
      <div id="content"></div>
    </div>
    
  6. Let’s spruce up the data entry part. Take a look at the Bootstrap docs for CSS Forms. You’ll have to refer to the diffs on github for this change. Or you can take creative license here!

Adding Records Fails

The first issue is that we’re not submitting the JSON correctly to add new records.

Started POST "/comments.json" for 127.0.0.1 at 2014-08-22 21:48:55 -1000
Processing by CommentsController#create as JSON
  Parameters: {"author"=>"JG", "text"=>"Another **comment**"}
Completed 400 Bad Request in 1ms

ActionController::ParameterMissing (param is missing or the value is empty: comment):
  app/controllers/comments_controller.rb:72:in `comment_params'
  app/controllers/comments_controller.rb:27:in `create'

If you look at this method in comments_controller.rb, you can see the issue:

def comment_params
  params.require(:comment).permit(:author, :text)
end

The fix is to wrap the params in “comment”, by changing this line in comments.js.jsx, in the function handleCommentSubmit.

data: comment,

to

data: { comment: comment },

Here’s an enlarged view of that diff from RubyMine.

After that change, we can observe this in the console when adding a new record:

Started POST "/comments.json" for 127.0.0.1 at 2014-08-22 21:55:18 -1000
Processing by CommentsController#create as JSON
  Parameters: {"comment"=>{"author"=>"JG", "text"=>"Another **comment**"}}
   (0.1ms)  begin transaction
  SQL (0.7ms)  INSERT INTO "comments" ("author", "created_at", "text", "updated_at") VALUES (?, ?, ?, ?)  [["author", "JG"], ["created_at", "2014-08-23 07:55:18.234473"], ["text", "Another **comment**"], ["updated_at", "2014-08-23 07:55:18.234473"]]
   (3.0ms)  commit transaction
  Rendered comments/show.json.jbuilder (0.7ms)
Completed 201 Created in 22ms (Views: 5.0ms | ActiveRecord: 3.9ms)

When Visiting Other Pages in the App

If you go to the url 0.0.0.0:3000/comments and look at the browser console, you’ll see an error due to the page-load script looking for an element with id content that doesn’t exist. Let’s fix that by checking that the DIV with id content exists before calling React.renderComponent.

$(function() {
  var $content = $("#content");
  if ($content.length > 0) {
    React.renderComponent(
      <CommentBox url="comments.json" pollInterval={2000} />,
      document.getElementById('content')
    );
  }
})

Deploying to Heroku

It’s necessary to make a couple of changes to the Gemfile: use pg in production and add the rails_12factor gem.

gem 'sqlite3', group: :development
gem 'pg', group: :production

gem 'rails_12factor'

Turbolinks

If you’re going to have other pages in the application, it’s necessary to change when React.renderComponent is called, switching from the document “ready” event to the document “page:change” event. You can find more details at the Turbolinks gem repo.

$(document).on("page:change", function() {
  var $content = $("#content");
  if ($content.length > 0) {
    React.renderComponent(
      <CommentBox url="comments.json" pollInterval={2000} />,
      document.getElementById('content')
    );
  }
})

Golden Gate Ruby Conference (GoGaRuCo) Pictures 2014

I took lots of great pictures at the Golden Gate Ruby Conference this year.

Overall, the conference was awesome. All the speakers seemed incredibly well prepared.

In case you haven’t heard, this was the last GoGaRuCo conference. Why? I heard that the costs for the facility are going up, especially the costs for catering. I also suspect that other new conferences, such as Ember Conf, are competing for attention. And certainly it’s been a huge undertaking for the conference organizers.

I’ve been toying around with creating a Rails on Maui Conference, and I’ve created a forum for exactly this sort of discussion.

Should we have a Maui Rails Conference? Let’s discuss the possibility of such a conference here. I’d need at least several committed co-organizers in order for this to become a reality. A possible date would be next September, 2015, given that GoGaRuCo will no longer take place.

I’d propose having a smaller, less formal conference for the first year. I’ve got a very reasonably priced venue in mind that could take up to 100 participants.

Ideas? Want to help?

I’ve broken the pictures up into smaller sets of the best pictures which I’ve placed in Facebook albums. Then I’ve got the complete sets of images posted to Flickr.

If you need any full resolution, non-watermarked images, please get in touch with me.

Storing or Excluding Node Modules in Rails Git Repositories

It was, and may still be, fashionable in the Node community to check dependencies into one’s git repository, per the following links. However, Rubyists use Bundler, and I’ve never heard of checking gem dependencies into a Ruby project. So what do we do when we have Node dependencies in a Rails project?

Reasons to include node_modules in git

  1. Stack Overflow on why you should check node_modules into git and not have node_modules in your .gitignore.
  2. Mikeal Rogers’ post on this. Note, this post was from 2011. He says:

    Why can’t I just use version locking to ensure that all deployments get the same dependencies?

    Version locking can only lock the version of a top level dependency. You lock your version of express to a particular version and you deploy to a new machine 3 weeks later it’s going to resolve express’s dependencies again and it might get a new version of Connect that introduces subtle differences that break your app in super annoying and hard to debug ways because it only ever happens when requests hit that machine. This is a nightmare, don’t do it.

    and concludes with:

    All you people who added node_modules to your gitignore, remove that shit, today, it’s an artifact of an era we’re all too happy to leave behind. The era of global modules is dead.

    And so this was all true, but only before npm-shrinkwrap was released (see below)!

  3. The Node FAQ clearly states:
    1. Check node_modules into git for things you deploy, such as websites and apps.
    2. Use npm to manage dependencies in your dev environment, but not in your deployment scripts.

Reasons not to include node_modules in git

Including node_modules in your git repo greatly increases file churn for files your team did not create, making pull requests on github problematic due to large numbers of changed files.

One problem with npm install is that while your package.json file may be locking down your dependency versions, it does not lock down your dependencies’ dependencies!

Instead, one can use npm-shrinkwrap to lock down all the dependencies, per this answer for Should “node_modules” folder be included in the git repository. It’s worth noting that supposedly Heroku will use npm-shrinkwrap.json, per this answer on Stack Overflow. Probably the best documentation for this is in the npm-install man page.
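The difference is easiest to see in the shape of the file npm-shrinkwrap generates: unlike package.json, it pins the whole tree, not just your direct dependencies. The module names and versions below are illustrative examples, not taken from any real project:

```javascript
// Illustrative shape of an npm-shrinkwrap.json, as a JS object:
var shrinkwrap = {
  name: "my-app",
  dependencies: {
    express: {
      version: "3.4.7",
      dependencies: {
        connect: { version: "2.12.0" } // transitive dependency, pinned too
      }
    }
  }
};
console.log(shrinkwrap.dependencies.express.dependencies.connect.version); // 2.12.0
```

With package.json alone, only the express version would be locked; connect could drift on the next install.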

Conclusion

Consequently, I’m going with the approach of not including node_modules in my git repository by:

  1. Using npm-shrinkwrap.
  2. Placing node_modules in my project specific .gitignore.

I’ll do this for my projects until I’m convinced otherwise!

Updating My Blog to Octopress With Jekyll 2 and Discourse for Comments

This weekend I made the ambitious move to using Discourse for my blog comments and also upgrading Octopress to the latest version, which supports Jekyll 2.0. Here are my notes, so that you can evaluate whether you want to do either of these, as well as how to do so efficiently.

Motivation

What motivated me to update Octopress? The main reason was that Octopress finally got upgraded from a beta version of Jekyll to Jekyll 2.x.

What motivated me to migrate comments to Discourse?

  1. I already wanted to create a forum for my website, so integrating blog comments seemed worth pursuing. This is what BoingBoing uses for its blog articles. Click on the “Discuss” link below any BoingBoing article and get taken to the Discourse topic for that article.
  2. I wanted to be able to have more engaging conversations with my programmer friends on the topics which I’m blogging about.

What’s super cool about doing the conversion?

  1. Discourse will automatically create topics for each of your blog posts. You can see that here: http://forum.railsonmaui.com/category/blog
  2. Discourse can import the Disqus comments from your blog!

    What this looks like on the blog, http://www.railsonmaui.com:

    What this looks like on the forum, http://forum.railsonmaui.com:

Updating Octopress

Googling for upgrading octopress gave me my own article as the second match. That’s one more great reason to blog and have your notes indexed by Google!

I ran into one difficult issue with the upgrade. The issue was the very frustrating:

bin/ruby_executable_hooks:15: stack level too deep (SystemStackError)

How did I solve the problem?

Naturally the first thing to do is to google the error message. That was not particularly helpful.

Since I assumed that this problem would be pretty specific to my Octopress site, I guessed that the issue was related to a rogue Jekyll plugin.

I moved all my plugins that were not part of standard Octopress into a /plugins_removed directory, and then added back my plugins one at a time. That helped me narrow down the issue to the jekyll_alias_generator plugin, which sets up redirects when you change the URL of a published blog article.

Then I clicked on the Issues icon for the jekyll_alias_generator and searched for stack level too deep and BINGO!

And here’s the solution: Stack level too deep error #14, which is to replace lines 73-75 in alias_generator.rb with this code:

(alias_index_path.split('/').size).times do |sections|
    @site.static_files << Jekyll::AliasFile.new(@site, @site.dest, alias_index_path.split('/')[1, sections + 1].join('/'), '')
end

Update: rather than using the AliasGenerator plugin, use: jekyll/jekyll-redirect-from

Another issue I hit was that I had a few template files that were using

layout: nil

This results in errors like:

Build Warning: Layout 'nil' requested in atom.xml does not exist.

This got changed in the recent version of Jekyll to use null, like this:

layout: null

So grep your files for layout: nil and change those to layout: null.

Installing Discourse for Blog Comments

This is well described in the following articles. I’ll give you my specific steps below.

  1. Setting up discourse on Docker: github: discourse/docker and discourse/docs/INSTALL-digital-ocean.md. You can probably do fine on a $10/month plan. The trickiest part is to be sure that you do every step very carefully. It’s very easy to make one typo and then slow the process down!
  2. Embedding Discourse in Static Sites is the primary source of information on converting from Disqus to Discourse for your blog comments.
  3. Discourse plugin for static site generators like Jekyll or Octopress: Specifics for Octopress and Jekyll.

Once you configure your Discourse site to import your blog articles, you’ll have to wait a bit for the rake task to run. It’s great being able to kickstart the content of the forum with one’s blog articles!

Discourse Configuration

The configuration of Discourse for blogging is super easy.

  1. Configure the following settings, taking note that:
    1. The urls are to your blog and include the subdomain, like www.railsonmaui.com.
    2. The embeddable host does not include http://
    3. The feed polling URL does include http://

  2. I added a category called “Blog”.
  3. I created a user called “disqus” for users not found in the Disqus comment import.

Octopress Discourse Comments Setup

  1. Remove or comment out your Disqus setup in your /_config.yml file:
    
    # Disqus Comments
    # Removed as support for Discourse comments added
    # disqus_short_name: railsonmaui
    # disqus_show_comment_count: true
    

    Note, I commented it out rather than deleting it, because I toggled this on and off while ensuring that the comment migration worked correctly and no comments were missed.

  2. Add the plugin contained in discourse_comments.rb to your /plugins directory. This plugin will append a DIV to your posts like this:
    
    <div id="discourse-comments"></div>
    <script type="text/javascript">
      var discourseUrl = "#{@site.config['discourse_url']}",
          discourseEmbedUrl = "#{@site.config['url']}#{@site.config['baseurl']}#{url}";
    
      (function() {
        var d = document.createElement('script'); d.type = 'text/javascript'; d.async = true;
        d.src = discourseUrl + 'javascripts/embed.js';
        (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(d);
        })();
    </script>
    
  3. Note that the display of comments only works on your live website, due to the fact that the Discourse server checks the source of the request for the comments (per the embeddable host setting in the configuration above).

Detailed instructions for importing your Disqus comments into Discourse

The following instructions will allow you to import the comments from Disqus, along with creating associated users for those comments. This is a GREAT way to kickstart the forum.

  1. Download an XML backup of your Disqus comments by logging into your Disqus dashboard. The URL is like https://youraccount.disqus.com/admin/discussions/.
  2. That should bring you to the Discussions tab. Then click the Export sub-tab. You’ll have to wait a few minutes for the creation email. I then saved the file to my ~/Downloads directory.
  3. Ssh to your Digital Ocean droplet:
    
    ssh root@XXX.XXX.XXX.XXX
    
  4. Get into your docker instance.
    root@forum:~# cd /var/discourse/
    root@forum:/var/discourse# ./launcher ssh app
    

    You’ll see this message:

    Welcome to Discourse Docker
    Use: rails, rake or discourse to execute commands in production
    
  5. Sudo to discourse:
    root@forum:~# sudo -iu discourse
    discourse@forum:~$ cd /var/www/discourse
    discourse@forum:/var/www/discourse$ bundle exec thor list
    
  6. Then you need to copy the XML file you downloaded from Disqus that contains an archive of your comments. The easiest way to do this is to scp the file from some place accessible on the Internet. What I did was to scp the file from my local machine to my Digital Ocean machine, and then from my Digital Ocean machine to the Docker container. Here’s an example:

    On your local machine, with the XML file (XXX.XXX.XXX.XXX is the ip of your droplet):

    
    scp ~/Downloads/railsonmaui-disqus.xml root@XXX.XXX.XXX.XXX:
    

    Then inside of your docker container:

    discourse@forum:/var/www/discourse$ scp root@XXX.XXX.XXX.XXX:railsonmaui-disqus.xml .
    

    That puts the file railsonmaui-disqus.xml in the current directory.

  7. Run the thor command:
    discourse@forum:/var/www/discourse$ bundle exec thor disqus:import --file=railsonmaui-disqus.xml --post-as=disqus --dry-run
    /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.6/lib/active_record/connection_adapters/postgresql_adapter.rb:898:in `rescue in connect': FATAL:  database "discourse_development" does not exist (ActiveRecord::NoDatabaseError)
    Run `$ bin/rake db:create db:migrate` to create your database
      from /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.6/lib/active_record/connection_adapters/postgresql_adapter.rb:888:in `connect'
    

    The problem is that we need to specify the environment, as is standard with Rails apps:

    
    RAILS_ENV=production bundle exec thor disqus:import --file=railsonmaui-disqus.xml --post-as=disqus --dry-run
    

    That command does the trick and gives you a nice message indicating what it will do once you remove the --dry-run flag.

    discourse@forum:/var/www/discourse$ RAILS_ENV=production bundle exec thor disqus:import --file=railsonmaui-disqus.xml --post-as=disqus --dry-run
    Creating Favorite RubyMine Tips - Rails on Maui... (8 posts)
    Creating Octopress Setup with Github, Org Mode, and LiveReload - Rails on Maui... (3 posts)
    

    Once you verify, run:

    
    RAILS_ENV=production bundle exec thor disqus:import --file=railsonmaui-disqus.xml --post-as=disqus
    

    This creates the comments and the users. Creating the users surprised me as I didn’t know that the Disqus export contained the users’ email addresses. So this script ends up triggering activation emails to all those users!

Conclusion

This is all pretty neat! Not only did I get my new forum populated with some content, but I also created users who commented on my posts in the past. I’m hoping I can engage in more meaningful discussions regarding the technologies that I blog about with my own forum. Please do sign-up for the forum so you can comment and receive periodic updates of what gets posted! Or just sign up when you want to post a comment. :-)

Pry, Ruby, Array#zip, CSV, and the Hash[] Constructor

A couple weeks ago, I wrote a popular article, Pry, Ruby, and Fun With the Hash Constructor demonstrating the usefulness of pry with the Hash bracket constructor. I just ran into a super fun test example of pry that I couldn’t resist sharing!

The Task: Convert CSV File without Headers to Array of Hashes

For example, you want to take a csv file like:

|---+--------+--------|
| 1 | Justin | Gordon |
| 2 | Tender | Love   |
|---+--------+--------|

And create an array of hashes like this with column headers “id”, “first_name”, “last_name”:

[
    [0] {
               "id," => "1",
        "first_name" => "Justin",
         "last_name" => "Gordon"
    },
    [1] {
               "id," => "2",
        "first_name" => "Tender",
         "last_name" => "Love"
    }
]

You’d think that you could just pass the headers to the CSV.parse, but that doesn’t work:

[11] (pry) main: 0> col_headers = %w(id, first_name last_name)
[
    [0] "id,",
    [1] "first_name",
    [2] "last_name"
]
[12] (pry) main: 0> csv = CSV.parse(csv_string, headers: col_headers)
(pry) output error: #<NoMethodError: undefined method `table' for #<Object:0x007fdbfc8d5588>>

Using Array#zip

I stumbled upon a note about the CSV parser that suggested using Array#zip to add keys to the results created by the CSV parser when headers don’t exist in the file.

Using Array#zip? What the heck is the zip method? Compression?

[1] (pry) main: 0> ? a_array.zip

From: array.c (C Method):
Owner: Array
Visibility: public
Signature: zip(*arg1)
Number of lines: 17

Converts any arguments to arrays, then merges elements of self with
corresponding elements from each argument.

This generates a sequence of ary.size _n_-element arrays,
where _n_ is one more than the count of arguments.

If the size of any argument is less than the size of the initial array,
nil values are supplied.

If a block is given, it is invoked for each output array, otherwise an
array of arrays is returned.

   a = [ 4, 5, 6 ]
   b = [ 7, 8, 9 ]
   [1, 2, 3].zip(a, b)   #=> [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
   [1, 2].zip(a, b)      #=> [[1, 4, 7], [2, 5, 8]]
   a.zip([1, 2], [8])    #=> [[4, 1, 8], [5, 2, nil], [6, nil, nil]]

Hmmmm….Why would that be useful?

Here are some pry commands that demonstrate this. I encourage you to follow along in pry!

I first created a CSV string by hand like this:

[2] (pry) main: 0> csv_file = <<-CSV
[2] (pry) main: 0* 1, "Justin", "Gordon"
[2] (pry) main: 0* 2, "Avdi", "Grimm"
[2] (pry) main: 0* CSV
"1, \"Justin\", \"Gordon\"\n2, \"Avdi\", \"Grimm\"\n"
[3] (pry) main: 0> CSV.parse(csv_file) { |csv_row| p csv_row }
CSV::MalformedCSVError: Illegal quoting in line 1.
from /Users/justin/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/csv.rb:1855:in `block (2 levels) in shift'

Doooh!!!! That taught me that creating a legit CSV string is not as easy as it sounds.

Let’s create a legit csv string:

[4] (pry) main: 0> csv_string = CSV.generate do |csv|
[4] (pry) main: 0*   csv << [1, "Justin", "Gordon"]
[4] (pry) main: 0*   csv << [2, "Tender", "Love"]
[4] (pry) main: 0* end
"1,Justin,Gordon\n2,Tender,Love\n"

Notice, there are no quotes around the single-word names!

If I use CSV to parse this, we get the reverse result, the array of arrays, back:

[16] (pry) main: 0> CSV.parse(csv_string)
[
    [0] [
        [0] "1",
        [1] "Justin",
        [2] "Gordon"
    ],
    [1] [
        [0] "2",
        [1] "Tender",
        [2] "Love"
    ]
]
[17] (pry) main: 0> CSV.parse(csv_string).class
Array < Object

Ahh…Could we use the Hash[] constructor to convert these arrays into Hashes with the proper keys?

[18] (pry) main: 0> first_row = CSV.parse(csv_string).first
[
    [0] "1",
    [1] "Justin",
    [2] "Gordon"
]
[19] (pry) main: 0> col_headers = %w(id, first_name last_name)
[
    [0] "id,",
    [1] "first_name",
    [2] "last_name"
]
[20] (pry) main: 0> first_row.zip(col_headers)
[
    [0] [
        [0] "1",
        [1] "id,"
    ],
    [1] [
        [0] "Justin",
        [1] "first_name"
    ],
    [2] [
        [0] "Gordon",
        [1] "last_name"
    ]
]
[21] (pry) main: 0> Hash[ first_row.zip(col_headers) ]
{
         "1" => "id,",
    "Justin" => "first_name",
    "Gordon" => "last_name"
}

Bingo!

Now, let’s handle the full array of arrays, storing it in a variable called rows:

[22] (pry) main: 0> rows = CSV.parse(csv_string)
[
    [0] [
        [0] "1",
        [1] "Justin",
        [2] "Gordon"
    ],
    [1] [
        [0] "2",
        [1] "Tender",
        [2] "Love"
    ]
]

Then the grand finale!

[24] (pry) main: 0> rows.map { |row| Hash[ col_headers.zip(row) ] }
[
    [0] {
               "id," => "1",
        "first_name" => "Justin",
         "last_name" => "Gordon"
    },
    [1] {
               "id," => "2",
        "first_name" => "Tender",
         "last_name" => "Love"
    }
]

And sure, you can do this all on one line by inlining the rows variable:

CSV.parse(csv_string).map { |row| Hash[ col_headers.zip(row) ] }
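
Putting it together as a self-contained script: one subtlety worth calling out is that %w splits on whitespace only, so writing %w(id, first_name last_name) would quietly make the first header the literal string "id," (comma included). Dropping the comma gives clean keys:

```ruby
require 'csv'

col_headers = %w(id first_name last_name)  # no comma: %w splits on whitespace only
csv_string  = "1,Justin,Gordon\n2,Tender,Love\n"

# Each parsed row (an Array of fields) is zipped with the headers,
# then Hash[] turns the pairs into a Hash.
rows = CSV.parse(csv_string).map { |row| Hash[col_headers.zip(row)] }
# rows[0] => {"id"=>"1", "first_name"=>"Justin", "last_name"=>"Gordon"}
```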

Using headers option in CSV?

Well, you’d think that you could just pass the headers to the CSV.parse, but that doesn’t work:

[12] (pry) main: 0> csv = CSV.parse(csv_string, headers: col_headers)
(pry) output error: #<NoMethodError: undefined method `table' for #<Object:0x007fdbfc8d5588>>

Well, what’s the doc?

[13] (pry) main: 0> ? CSV.parse

From: /Users/justin/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/csv.rb @ line 1278:
Owner: #<Class:CSV>
Visibility: public
Signature: parse(*args, &block)
Number of lines: 11

:call-seq:
  parse( str, options = Hash.new ) { |row| ... }
  parse( str, options = Hash.new )

This method can be used to easily parse CSV out of a String.  You may either
provide a block which will be called with each row of the String in turn,
or just use the returned Array of Arrays (when no block is given).

You pass your str to read from, and an optional options Hash containing
anything CSV::new() understands.

Hmmm…seems that passing the headers should have worked.

The CSV docs clearly state that the initialize method takes an option :headers

:headers If set to :first_row or true, the initial row of the CSV file will be treated as a row of headers. If set to an Array, the contents will be used as the headers. If set to a String, the String is run through a call of ::parse_line with the same :col_sep, :row_sep, and :quote_char as this instance to produce an Array of headers. This setting causes #shift to return rows as CSV::Row objects instead of Arrays and #read to return CSV::Table objects instead of an Array of Arrays.

So, what can we call on a new CSV object? Let’s list the methods.

[25] (pry) main: 0> ls CSV.new(csv_string, headers: col_headers)
Enumerable#methods:
  all?            count       each_entry        find        group_by  map      minmax     reject        sum         to_table
  any?            cycle       each_slice        find_all    include?  max      minmax_by  reverse_each  take        to_text_table
  as_json         detect      each_with_index   find_index  index_by  max_by   none?      select        take_while  zip
  chunk           drop        each_with_object  first       inject    member?  one?       slice_before  to_a
  collect         drop_while  entries           flat_map    lazy      min      partition  sort          to_h
  collect_concat  each_cons   exclude?          grep        many?     min_by   reduce     sort_by       to_set
CSV#methods:
  <<           col_sep            fcntl             header_convert     lineno      readline         skip_blanks?  to_io
  add_row      convert            field_size_limit  header_converters  path        readlines        skip_lines    truncate
  binmode      converters         fileno            header_row?        pid         reopen           stat          tty?
  binmode?     each               flock             headers            pos         return_headers?  string        unconverted_fields?
  close        encoding           flush             inspect            pos=        rewind           sync          write_headers?
  close_read   eof                force_quotes?     internal_encoding  puts        row_sep          sync=
  close_write  eof?               fsync             ioctl              quote_char  seek             tell
  closed?      external_encoding  gets              isatty             read        shift            to_i
instance variables:
  @col_sep     @field_size_limit   @headers  @parsers     @re_chars        @row_sep      @unconverted_fields
  @converters  @force_quotes       @io       @quote       @re_esc          @skip_blanks  @use_headers
  @encoding    @header_converters  @lineno   @quote_char  @return_headers  @skip_lines   @write_headers

How about this:

[14] (pry) main: 0> csv = CSV.new(csv_string, headers: col_headers).to_a
[
    [0] #<CSV::Row "id,":"1" "first_name":"Justin" "last_name":"Gordon">,
    [1] #<CSV::Row "id,":"2" "first_name":"Tender" "last_name":"Love">
]

Well, that’s getting closer.

How about if I just map those rows with a to_hash?

[16] (pry) main: 0> csv = CSV.new(csv_string, headers: col_headers).map(&:to_hash)
[
    [0] {
               "id," => "1",
        "first_name" => "Justin",
         "last_name" => "Gordon"
    },
    [1] {
               "id," => "2",
        "first_name" => "Tender",
         "last_name" => "Love"
    }
]

Bingo!
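
Here is that second approach as a self-contained script, again using comma-free headers:

```ruby
require 'csv'

col_headers = %w(id first_name last_name)
csv_string  = "1,Justin,Gordon\n2,Tender,Love\n"

# Passing headers: to CSV.new makes each yielded row a CSV::Row,
# and CSV::Row#to_hash pairs each header with its field.
rows = CSV.new(csv_string, headers: col_headers).map(&:to_hash)
# rows[0] => {"id"=>"1", "first_name"=>"Justin", "last_name"=>"Gordon"}
```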

I hope you enjoyed this!

Rails Gem Upgrading Tips and Strategies

What are the best-practices for upgrading gems to newer versions? What sort of tips and techniques can save time and headaches?

I built this guide based on my real-world experiences over years of gem migrations, including a recent upgrade to Rails 4.1, RSpec 3.0, and Twitter Bootstrap 3.2. There are some more specific examples of errors you might encounter at this article on the Rails on Maui blog: Specific Issues Upgrading Gems to Rails 4.1, RSpec 3, and Twitter Bootstrap 3.2.

Why Update?

Here are my favorite reasons for keeping gems relatively current:

  1. If you work on several projects, keeping the gems and ruby version consistent makes your coding more productive as you don’t have to keep adjusting for which version is which. Web searches tend to find relatively recent versions first. It’s relatively annoying to be yak shaving issues that turn out to be “oh, that doesn’t work in that older version of Rails”.
  2. Recent versions of gems will have fixes for bugs and security issues, in addition to new features. With popular open source projects, new bugs are quickly discovered and fixed.
  3. Updates are much easier if you stay relatively current. I.e., it’s much easier to update from Rails 4.0 to Rails 4.1 than to go from Rails 3.0 to Rails 4.1.

That being said, recent versions can have new bugs, so it’s best to avoid versions that are unreleased or that haven’t aged at least a few weeks.

Some Gems Will Be Way More Difficult to Update

Large libraries, like Rails, RSpec, Twitter Bootstrap, etc. are going to take more elbow grease to update. Typically if a major version number is updating, like Rails 3.x to 4.x and RSpec 2.x to 3.x, that’s going to require lots of code changes. Semantic versioning also comes into play. Going from Rails 3.x to Rails 4.x is more difficult than Rails 4.0 to Rails 4.1. There’s a similar story with RSpec 2.x to 2.99, compared to going to RSpec 3.0.

Techniques for Smoother Gem Upgrades

Locking Gem Versions

Unless you have a good reason, don’t lock a gem to a specific version as that makes updating gems more difficult. In general, consider only locking the major Rails gems, such as rails, RSpec, and bootstrap-sass, as these are the ones that will likely have more involved upgrades.
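
For illustration, a Gemfile following this advice might use pessimistic (~>) constraints on just the big libraries; the gem names and version numbers below are only examples:

```ruby
# Gemfile fragment -- lock only the large, high-churn libraries.
gem 'rails',          '~> 4.1.0'  # allows 4.1.x patch releases only
gem 'bootstrap-sass', '~> 3.2.0'
gem 'rspec-rails',    '~> 3.0'    # would live in the :test group in a real Gemfile

# Everything else floats, so `bundle update some_gem` can move it freely:
gem 'pg'
```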

Don’t Upgrade Major Libraries Too Soon

3 Reasons to wait a bit before gem updates:

  1. Dependencies among gem libraries are not yet resolved. I had tried upgrading to RSpec 3 and Rails 4.1 a couple months ago, but it was apparent that I had to fix too many other gems to get them to work with RSpec 3. Thus, I retreated back to RSpec 2.99 for a while. Now, as of August, 2014, the gem ecosystem was ripe to move to RSpec 3.0. So unless you have a good reason, it’s best to wait maybe a couple of months after major upgrades are released before migrating.
  2. Bugs may be lurking in changed code. If you wait a bit, the early adopters will find the bugs, saving you time and frustration. The more popular a gem, the faster it will be put to rigorous use.
  3. Security problems may have been introduced. This is pretty much a special case of bugs, except that there is the possibility of a malicious security change. If you wait a bit, hopefully somebody else will discover the issue first.

Don’t Use Guard, Zeus, Spring, Spork, Etc. When Upgrading

Tools that speed up Rails like Zeus and Spring are awesome productivity enhancers, except when upgrading gems. I found that they sometimes failed to reload new versions of gems. That means massive frustration when they are not picking up the gems you actually have specified. The corollary to this is to run your tests using plain rspec rather than the recommended ways for speeding up testing, such as the parallel_tests gem.

It’s not necessary to introduce the added complexity of the test accelerators when doing major library updates. Once you’ve updated your gems, then try out your favorite techniques for speeding up running tests. I’ve learned the hard way on this one. The pgr and pgk scripts below are awesome for ensuring that pre-loaders are NOT running.

pgr() {
  for x in spring rails phantomjs zeus; do 
    pgrep -fl $x;
  done
}

pgk() {
  for x in spring rails phantomjs zeus; do 
    pkill -fl $x;
  done
}

Tests: Try to Keep and Immediately Get Tests Passing

There are a lot of discussions about the value, or lack thereof, of an emphasis on Test-Driven Development (TDD). However, one thing that’s indisputable is that having a large library of tests is absolutely helpful for upgrading your gems.

Naturally, it’s an iterative process to get tests passing when updating gems. First, make sure your test suite is passing.

You can try updating the gems one by one until you get a test failure. Then the issue becomes one of figuring out which related gems you might want to update to fix the test failure.

If you don’t have good test coverage, a great place to start is with integration tests that do the basics of your app. At least you’ll be able to quickly verify that a good chunk of your app can navigate the “happy path” as you iterate on updating your gems.

Alternate Big or Baby Steps

If you’ve updated gems recently, sometimes you can run bundle update and everything works great. Recently, that strategy failed miserably when I tried going from Rails 4.0 with RSpec 2.2 to Rails 4.1 and RSpec 3. An earlier attempt shortly after the releases of Rails 4.1 and RSpec 3 clearly showed that many dependent gems would have to get updated. A few months later, I still had many issues with trying to update too much at once.

When this happens, take small steps and keep tests passing. I.e., don’t do a bundle update without specifying which gems to update. You might update 60 gems at once! And then when tests fail, you won’t be able to easily decipher which dependency is the problem. Specify which gems to update by running the command:

bundle update gem1 gem2 etc

Then after updating a few gems, run rspec and verify your tests pass.

Then commit your changes. Consider putting a summary of how many tests pass and how long the suite takes in the commit message. The timing is useful in case some change greatly increases test run time, or in case the run time or the number of tests dramatically decreases. Plus, this ensures you ran the tests before committing!

On a related note, you can see which gems are outdated with this command: bundle outdated.

Specific Issues Upgrading Gems to Rails 4.1, RSpec 3, and Twitter Bootstrap 3.2

This article describes some tougher issues I faced when upgrading to Rails 4.1, Twitter Bootstrap 3.2 and RSpec 3. This is a companion to my related article on Rails Gem Upgrading Tips and Strategies.

Upgrade Links

If you’re upgrading these specific gems, here’s the must-see upgrade links.

  1. Rails 4.1: A Guide for Upgrading Ruby on Rails.
  2. RSpec 2 to RSpec 3.
  3. Twitter Bootstrap: Migrating to v3.x is essential if you’re going from 2.x to 3.x.

Troubleshooting with RubyMine “Find In Path” and the Debugger

After making the required code changes to address the deprecation errors going to RSpec 3, I ran into the below obscure error. This one really stumped me, due to the fact that the stack trace did not give me a specific line causing the error, and when I ran the tests individually, I didn’t see any errors.

Failure/Error: Unable to find matching line from backtrace
PG::ConnectionBad: connection is closed

Here’s the stack trace:

Failure/Error: Unable to find matching line from backtrace
PG::ConnectionBad:
  connection is closed
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/postgresql_adapter.rb:589:in `reset'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/postgresql_adapter.rb:589:in `reconnect!'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract_adapter.rb:377:in `verify!'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:458:in `block in checkout_and_verify'
# .rvm/gems/ruby-2.1.2@bpos/gems/activesupport-4.0.8/lib/active_support/callbacks.rb:373:in `_run__2436983933572130156__checkout__callbacks'
# .rvm/gems/ruby-2.1.2@bpos/gems/activesupport-4.0.8/lib/active_support/callbacks.rb:80:in `run_callbacks'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:457:in `checkout_and_verify'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:358:in `block in checkout'
# .rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/monitor.rb:211:in `mon_synchronize'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:355:in `checkout'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:265:in `block in connection'
# .rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/monitor.rb:211:in `mon_synchronize'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:264:in `connection'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:546:in `retrieve_connection'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_handling.rb:79:in `retrieve_connection'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/connection_handling.rb:53:in `connection'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/fixtures.rb:450:in `create_fixtures'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/fixtures.rb:899:in `load_fixtures'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/fixtures.rb:870:in `setup_fixtures'
# .rvm/gems/ruby-2.1.2@bpos/gems/activerecord-4.0.8/lib/active_record/fixtures.rb:712:in `before_setup'
# .rvm/gems/ruby-2.1.2@bpos/gems/rspec-rails-3.0.2/lib/rspec/rails/adapters.rb:71:in `block (2 levels) in <module:MinitestLifecycleAdapter>'
...

The error was happening in a test that used resque_spec. After much searching, I began to suspect that some customization or optimization caused the issue.

RubyMine Find in Path

RubyMine’s Find in Path, searching Project and Libraries, is extremely useful for getting more context around an error message. In this case, RubyMine found the error message in a C file.

Here’s the C code containing the error message. The Ruby stack trace did not go this far:

/*
 * Fetch the data pointer and check it for sanity.
 */
PGconn *
pg_get_pgconn( VALUE self )
{
  PGconn *conn = pgconn_check( self );

  if ( !conn )
    rb_raise( rb_eConnectionBad, "connection is closed" );

  return conn;
}

And this is the corresponding Ruby code from the stack trace:

# Disconnects from the database if already connected, and establishes a
# new connection with the database. Implementors should call super if they
# override the default implementation.
def reconnect!
  clear_cache!
  reset_transaction
end

RubyMine: Sometimes the Debugger Helps!

For the really troubling issue above, I put breakpoints in the connection adapter gem. I correctly guessed that the cause of the error was disconnect! rather than reconnect!.

Here’s a few images that show how the debugger really helped me figure out the obscure “connection is closed” error:

That is what led me to try out removing the heroku-resque gem, as I noticed that was what was closing the connections in my test runs. Removing that gem fixed my rspec errors with the upgrades.

Note, an alternative to using breakpoints in RubyMine would have been to put in a puts caller in the suspect methods of the libraries. However, one would have to remember to remove that later! I think the debugger was a good pick for this issue. If you don’t use RubyMine, you might try the ruby debugger or the pry gem.

Rails 4.1 Errors

shuffle! removed from ActiveRecord::Relation

NoMethodError:
  undefined method `shuffle!' for #<ActiveRecord::Relation []>

The fix for that is to convert the relation to an array before calling shuffle. Naturally, you only want to do this with a limited set of data.
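
As a plain-Ruby sketch of the fix: in the app this would read something like Conversation.where(status: :active).to_a.shuffle! (the model name is illustrative); the hash rows below just stand in for records loaded from a small Relation:

```ruby
# Calling .to_a first materializes the Relation into an Array,
# which still supports the mutating shuffle!.
records = [{ id: 1 }, { id: 2 }, { id: 3 }]  # stand-in for a small result set
shuffled = records.to_a.shuffle!
# shuffled holds the same records, in random order
```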

Flash changes

This one bit me: http://guides.rubyonrails.org/upgrading_ruby_on_rails.html#flash-structure-changes

I was comparing symbols when converting from the flash type to the bootstrap class. Since the keys are always normalized to strings, I changed the code to compare to strings.

It’s a good idea to review all the changes in the Rails Upgrade Guide.

Here’s the method where I was previously comparing the flash type to symbols rather than strings:

def twitterized_type(type)
  # http://ruby.zigzo.com/2011/10/02/flash-messages-twitters-bootstrap-css-framework/
  case type
    when "alert"
      "warning"
    when "error"
      "danger"
    when "notice"
      "info"
    when "success"
      "success"
    else
      type.to_s
  end
end
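
A quick plain-Ruby check of that method (copied from above) shows why the old symbol comparison silently broke: a symbol matches none of the string branches and just falls through to the else:

```ruby
def twitterized_type(type)
  # Rails 4.1 normalizes flash keys to strings, so compare against strings.
  case type
    when "alert"
      "warning"
    when "error"
      "danger"
    when "notice"
      "info"
    when "success"
      "success"
    else
      type.to_s
  end
end

twitterized_type("error")  # => "danger"  (string key, as Rails 4.1 delivers)
twitterized_type(:error)   # => "error"   (symbol falls through to the else branch)
```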

Upgrading Twitter Bootstrap to 3.2 from 3.0

I had this bit of code in my scss files from the old Twitter Bootstrap.

// Sprite icons path
// -------------------------
$iconSpritePath: asset-url("glyphicons-halflings.png");
$iconWhiteSpritePath: asset-url("glyphicons-halflings-white.png");

Since I’m using the new 3.2 version of bootstrap-sass, I needed to do the following, per the details here:

  1. Delete the glyphicons-halflings.png and glyphicons-halflings-white.png files.
  2. Remove the reference shown above to the $iconSpritePath.
  3. Add this line to my application.css.scss:

     @import "bootstrap-sprockets";

  4. Add this line to the Gemfile:

     gem 'autoprefixer-rails'

Please let me know if this article helped you or if I missed anything!

Aloha,

Justin

Fast Tests: Comparing Zeus With Spring on Rails 4.1 and RSpec 3

What’s faster? Zeus with Parallel Tests or Spring, in the context of Rails 4.1, RSpec 3, Capybara 2.4, and PhantomJs?

The bottom line is that both work almost equivalently as fast, and the biggest difference for me concerned compatibility with the parallel_tests gem. Zeus works fine with Parallel Tests, although it makes little difference overall with or without Zeus. Spring doesn’t work with Parallel Tests, but you can work around this issue. So stick with Zeus if it works for you.

And regardless of using Spring or Zeus, the shell scripts provided below called pgr and pgk are essential for quickly listing or killing Zeus, Spring, Rails, or Phantomjs processes!

It’s also worth noting that the biggest advantage of using the Zeus or Spring pre-loaders is saving the Rails startup time. On my machine, this is about 3 to 5 seconds. That matters a lot if the test I’m focusing on only takes a second or two, such as when doing TDD. However, when running a whole test suite taking minutes, 3-5 seconds can get swallowed up by other things, such as rspec-retry, which retries failing Capybara tests.

Overview

I’ve written about my integration testing setup: Capybara, PhantomJs, Poltergeist, and Rspec Tips. For a while, I’ve been eager to upgrade to Rails 4.1 and RSpec 3. Finally, in August, 2014, the gem ecosystem allowed this to happen! I’ve got a related article on my tips for upgrading to Rails 4.1 and RSpec 3.

Once I had upgraded nearly every gem in my client’s large Rails project to the latest gem versions, I was pleasantly surprised that I could once again get Zeus, Guard, RSpec, Capybara, Poltergeist, Parallel Tests, etc. to all play nicely together.

Always curious as to the value of the latest defaults in Rails, I decided to try out Spring. Both Spring and Zeus preload Rails so that you don’t have to pay the same startup cost for every test run. Here’s a RailsCast on the topic: #412 Fast Rails Commands.

The end result is that both Zeus and Spring give great results and are very similar in many ways. The biggest difference for me is that only Zeus (and not Spring) works with Parallel Tests. Interestingly, I got very similar results when using Parallel Tests with or without Zeus. It turns out that it is possible to run Parallel Tests with Spring installed so long as you disable it by setting an environment variable like this: DISABLE_SPRING=TRUE parallel_rspec -n 6 spec.

The bottom line for me is that I don’t have any good reason to move away from Zeus to Spring, and the fact that Spring is part of stock Rails is not a sufficient reason for me. That said, on another, smaller project, I’m not motivated to switch from Spring to Zeus.

Performance

Note that in the commands below, you must insert zeus into the command to be using Zeus. If using Spring, be sure that you’re using the Spring-modified binstub scripts in your bin directory, either by having your path appropriately set or by using bin/rake and bin/rspec (install spring-commands-rspec).
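For reference, wiring up the Spring-aware binstub looks something like this, per the spring-commands-rspec README (this is a sketch, not my exact Gemfile):

```ruby
# Gemfile — spring-commands-rspec teaches Spring about the rspec command,
# which lets `spring binstub rspec` generate a Spring-aware bin/rspec
group :development do
  gem 'spring-commands-rspec'
end
```

After bundle install, run bundle exec spring binstub rspec to generate bin/rspec.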

The times shown below are from sample runs of both a single directory of non-integration specs and the full test suite of 914 tests, many of which are Capybara tests, on a 2012 Retina MacBook Pro (SSD, 16 GB) while running Emacs, RubyMine, Chrome, etc. Times were gathered by prefixing commands with the time command. Running zeus rspec seems a bit slower than using spring. However, when running the integration tests, my test execution time was always variable, depending on the number of Capybara timeouts and retries.

command                   | zeus loader | spring loader | no loader
rspec spec/utils          | 0:19.1      | 0:17.7        | 0:22.8
rake spec:utils           | 0:15.6      | 0:17.9        | 0:18.1
rake spec                 | 6:11.9      | 6:15.0        | 8:02.5
rspec spec                | 5:51.7      | 5:28.0        | 5:37.2
parallel_rspec -n 6 spec  | 2:28.7      | n/a           | 2:28.0

Zeus and Spring vs. plain RSpec

Here are some advantages and disadvantages of using either Zeus or Spring compared to plain RSpec.

Advantages

  1. Both save time for running basic commands like rspec, rake, rails, etc. The performance of both is very similar.

Disadvantages

  1. Both can be extremely confusing when they fail to reload code automatically. This tends to happen after updating gems or running database migrations. You end up yak shaving when you don’t see your changes taking effect! For example, you put in some print statements and then don’t see their output when you should. Arghhhh!
  2. Rspec-retry seems essential for dealing with random Capybara failures with either Zeus or Spring. I often see fewer of these errors when I use neither Zeus/Spring nor parallel_tests.
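For reference, a minimal rspec-retry setup looks something like this (the retry count shown is an arbitrary example, not necessarily what my suite uses):

```ruby
# spec/spec_helper.rb — minimal rspec-retry wiring
require 'rspec/retry'

RSpec.configure do |config|
  config.verbose_retry = true       # report each retry in the output
  config.default_retry_count = 3    # arbitrary example value
end
```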

Zeus vs. Spring

Advantages

  1. Zeus works with the parallel_tests gem. This more than halves the time for running my entire test suite. However, when writing this article, I found that it made little difference, at least when slowed down by sporadically failing Capybara tests that are retried. That being said, I’m certain that Parallel Tests with Zeus is faster, or at worst the same, as without Zeus.

Disadvantages

  1. You need to start up a separate shell process running zeus start. An advantage of this is that if there’s a problem starting up, the output in the Zeus console window is fairly clear.
  2. You run the command “zeus rake” rather than just “rake”. Consequently, I made some shell aliases (see below).
  3. Zeus only uses the environment from when Zeus was started and ignores any environment variables when commands are run.

Spring vs. Zeus

Advantages

  1. Spring is a default part of Rails, so you know it’s well supported, and bugs will be fixed fast.
  2. It’s slightly simpler to install and use than Zeus.

Disadvantages

  1. Spring lacks support for parallel_tests. See this GitHub issue: incompatible with spring #309. You can, however, run parallel_tests so long as you run the command like this: time DISABLE_SPRING=TRUE parallel_rspec -n 6 spec. I.e., you need to set DISABLE_SPRING so that parallel_rspec does not use Spring.
  2. Spring is a bit opaque in terms of errors, given there’s no console window. See the README for how to see the Spring log.

Miscellaneous Tips

Be sure to disable either Zeus or Spring when updating gems. Consider restarting Zeus or Spring after a database migration. See the scripts below, called pgr and pgk, for listing and killing Zeus/Spring-related processes.

Relevant Gems Working For Me

The right combination of gems seems pretty critical to getting all the parts to play nicely together. As of August 15, 2014, the most recent compatible versions of the following gems worked well together. This means running “bundle update” without locking the gem versions.

capybara-screenshot (0.3.21)
capybara (2.4.1)
guard (2.6.1)
guard-bundler (2.0.0)
guard-livereload (2.3.0)
guard-rails (0.5.3)
guard-resque (0.0.5)
guard-rspec (4.3.1)
guard-unicorn (0.1.1)
parallel_tests (1.0.0)
poltergeist (1.5.1)
rails (4.1.4)
resque_spec (0.16.0)
rspec (3.0.0)
rspec-instafail (0.2.5)
rspec-its (1.0.1)
rspec-mocks (3.0.3)
rspec-rails (3.0.2)
rspec-retry (0.3.0)
vcr (2.9.2)
webmock (1.18.0)
zeus (0.13.3)
zeus-parallel_tests (0.2.4)

Zeus Shell Configuration (ZSH)

echoRun() {
  START=$(date +%s)
  echo "> $1"
  eval time $1
  END=$(date +%s)
  DIFF=$(( $END - $START ))
  echo "It took $DIFF seconds"
}

alias zr='zeus rake'

alias parallel_prepare='rake parallel:prepare ; rake parallel:rake\[db:globals\]'

zps() {
  # Run parallel_rspec, using zeus, passing in number of threads, default is 6

  p=${1:-6}
  # Skipping zeus b/c env vars don't work with zeus

  # start zeus with log level fatal
  # echoRun "SKIP_RSPEC_FOCUS=YES RSPEC_RETRY_COUNT=7 RAILS_LOGGER_LEVEL=4 zeus parallel_rspec -n $p spec"
  echoRun "zeus parallel_rspec -n $p spec"
}

# List processes related to rails
pgr() {
  for x in spring rails phantomjs zeus; do 
    pgrep -fl $x;
  done
}

# Kill processes related to rails
pgk() {
  for x in spring rails phantomjs zeus; do 
    pkill -fl $x;
  done
}

Please let me know if this article helped you or if I missed anything!

Aloha,

Justin

Pry, Ruby, and Fun With the Hash Constructor

I recently had a chance to pair with Justin Searls of TestDouble, and we got to chatting about pry and the odd Hash[] constructor. Here’s a few tips that you might find useful.

The main reasons I use pry are:

  1. Testing Ruby syntax.
  2. Documentation and source code browsing.
  3. History support.
  4. cd into an object to change the context, and ls to list methods of that object.

Pry Configuration

To install pry with Rails, place this in your Gemfile:

gem 'pry-rails', :group => :development

Then run bundle install, followed by rails console. That gets you the default pry configuration. At the bottom of this article is my ~/.pryrc (gist). Create that file and then run rails c (short for rails console).

You’ll see this useful reminder of the customizations:

Helpful shortcuts:
h  : hist -T 20       Last 20 commands
hg : hist -T 20 -G    Up to 20 commands matching expression
hG : hist -G          Commands matching expression ever used
hr : hist -r          hist -r <command number> to run a command
Sample variables
a_array: [1, 2, 3, 4, 5, 6]
a_hash: { hello: "world", free: "of charge" }

Testing syntax: Hash[]

The Hash[] method is one of the odder methods in Ruby, and oh-so-useful if you’re doing map/reduce types of operations.

For example, how do you transform all the keys in a hash to be uppercase?

Let’s try this in pry (note: a_hash is defined in my .pryrc).

[1] (pry) main: 0> a_hash
{
    :hello => "world",
     :free => "of charge"
}
[2] (pry) main> a_hash.map { |k,v| [k.to_s.upcase, v] }
[
    [0] [
        [0] "HELLO",
        [1] "world"
    ],
    [1] [
        [0] "FREE",
        [1] "of charge"
    ]
]

OK, that gives us an Array of tuples.

Then run these two commands. _ is the value of the last expression.

> tmp = _
> Hash[tmp]
{
    "HELLO" => "world",
     "FREE" => "of charge"
}

Bingo! Now let’s dig into this a bit more.
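Put together, the two pry steps above collapse into a one-liner:

```ruby
# Upcase all keys in a hash: map to [key, value] pairs, then rebuild with Hash[]
a_hash  = { hello: "world", free: "of charge" }
upcased = Hash[a_hash.map { |k, v| [k.to_s.upcase, v] }]
upcased # => {"HELLO"=>"world", "FREE"=>"of charge"}
```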

Memoization with Hash

Hash has another unusual constructor useful for memoizing a method’s return value when parameters are involved. Justin Weiss wrote a good article explaining it: 4 Simple Memoization Patterns in Ruby (and One Gem).

Here’s a quick sample in Pry:

[5] (pry) main: 0> hh = Hash.new { |h, k| h[k] = k * 2 }
{}
[6] (pry) main: 0> hh[2]
4
[7] (pry) main: 0> hh[4]
8

You can even use an array for the key values:

[8] (pry) main: 0> hh = Hash.new { |h, k| h[k] = k[0] * k[1] }
{}
[9] (pry) main: 0> hh[[2,3]]
6
[10] (pry) main: 0> hh[[4,5]]
20
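Applied to a method, the pattern looks something like this (the class and method names here are made up for illustration):

```ruby
# Memoize a computation keyed on its argument using Hash.new with a block.
# SlowMath/slow_square are hypothetical names, not from a real library.
class SlowMath
  def initialize
    @squares = Hash.new do |cache, n|
      cache[n] = n * n # imagine this being an expensive computation
    end
  end

  def slow_square(n)
    @squares[n] # computed once per distinct n, then served from the cache
  end
end
```

Calling slow_square(7) twice runs the block only once; the second call is a plain hash lookup.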

Browsing Documentation and Source

It’s super useful to be able to see the documentation for any method easily, which you can do with the ? command. Similarly, you can see the source by using $.

[3] (pry) main> ? Hash[]

From: hash.c (C Method):
Owner: #<Class:Hash>
Visibility: public
Signature: [](*arg1)
Number of lines: 12

Creates a new hash populated with the given objects.

Similar to the literal { _key_ => _value_, ... }. In the first
form, keys and values occur in pairs, so there must be an even number of
arguments.

The second and third form take a single argument which is either an array
of key-value pairs or an object convertible to a hash.

   Hash["a", 100, "b", 200]             #=> {"a"=>100, "b"=>200}
   Hash[ [ ["a", 100], ["b", 200] ] ]   #=> {"a"=>100, "b"=>200}
   Hash["a" => 100, "b" => 200]         #=> {"a"=>100, "b"=>200}

Hmmmm…. Hash[] also takes a flat list of key-value arguments, so we can splat a plain array into it. Let’s try that:

[16] (pry) main: 0> a_array
[
    [0] 1,
    [1] 2,
    [2] 3,
    [3] 4,
    [4] 5,
    [5] 6
]
[17] (pry) main: 0> Hash[*a_array]
{
    1 => 2,
    3 => 4,
    5 => 6
}

Neat!
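As an aside, if the splat feels too magical, Ruby 2.1+ offers to_h on an enumerable of pairs, which combines nicely with each_slice:

```ruby
# Pair up a flat array and convert it to a Hash without Hash[*...]
flat   = [1, 2, 3, 4, 5, 6]
paired = flat.each_slice(2).to_h
paired # => {1=>2, 3=>4, 5=>6}
```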

Also note that you can see instance methods by prefixing the method name with # or using an actual instance, like this:

[19] (pry) main: 0> ? Hash#keys

From: hash.c (C Method):
Owner: Hash
Visibility: public
Signature: keys()
Number of lines: 5

Returns a new array populated with the keys from this hash. See also
Hash#values.

   h = { "a" => 100, "b" => 200, "c" => 300, "d" => 400 }
   h.keys   #=> ["a", "b", "c", "d"]
[20] (pry) main: 0> ? a_hash.keys

Browsing History

History expansion in pry is also nice. As mentioned above, my .pryrc has 4 history aliases.

h  : hist -T 20       Last 20 commands
hg : hist -T 20 -G    Up to 20 commands matching expression
hG : hist -G          Commands matching expression ever used
hr : hist -r          hist -r <command number> to run a command

Let’s try those out. It’s important to note that -T tails the results after grepping the whole history. I.e., the -T 20 trims the results down to the last 20 that matched.

Show last 20 commands.

[10] (pry) main: 0> h
1: a_hash
2: a_hash.map { |k,v| [key.upcase, v] }
3: a_hash.map { |k,v| [key.to_s.upcase, v] }
4: a_hash.map { |k,v| [k.upcase, v] }
5: a_hash.map { |k,v| [k.to_s.upcase, v] }
6: tmp = _
7: Hash[tmp]
8: ? Hash[]
9: $ Hash[]

Grep all commands for upcase and show last 20 matches.

[11] (pry) main: 0> hg upcase
2: a_hash.map { |k,v| [key.upcase, v] }
3: a_hash.map { |k,v| [key.to_s.upcase, v] }
4: a_hash.map { |k,v| [k.upcase, v] }
5: a_hash.map { |k,v| [k.to_s.upcase, v] }

Grep all commands for upcase and show all matches. The history of my example is short, so the output below is the same as above. If the history were longer, as it typically will be, you might get pages of results!

[12] (pry) main: 0> hG upcase
 2: a_hash.map { |k,v| [key.upcase, v] }
 3: a_hash.map { |k,v| [key.to_s.upcase, v] }
 4: a_hash.map { |k,v| [k.upcase, v] }
 5: a_hash.map { |k,v| [k.to_s.upcase, v] }
11: hg upcase

cd and ls within Pry

I love to use cd and ls in pry.

  1. cd changes the context of pry, a bit like the current directory in the shell, except for Ruby objects. And classes are objects too!
  2. ls lists methods available on an object, a bit like listing files in the shell.
[22] (pry) main: 0> cd a_hash.keys
[26] (pry) main / #<Array>: 1> length
2
[27] (pry) main / #<Array>: 1> first
:hello
[28] (pry) main / #<Array>: 1> last
:free
[29] (pry) main / #<Array>: 1> ls
Enumerable#methods:
  all?  chunk           detect     each_entry  each_with_index   entries   find      flat_map  index_by  lazy   max     member?  min_by  minmax_by  one?           partition  slice_before  sum     to_table
  any?  collect_concat  each_cons  each_slice  each_with_object  exclude?  find_all  group_by  inject    many?  max_by  min      minmax  none?      original_grep  reduce     sort_by       to_set  to_text_table
JSON::Ext::Generator::GeneratorMethods::Array#methods: to_json_without_active_support_encoder
Statsample::VectorShorthands#methods: to_scale  to_vector
SimpleCov::ArrayMergeHelper#methods: merge_resultset
Array#methods:
  &    []=      clear        cycle       drop_while        fill        frozen?       inspect  permutation         push                  reverse       select     slice!      third                          to_gsl_integration_qaws_table        to_qaws_table  unshift
  *    abbrev   collect      dclone      each              find_index  grep          join     place               rassoc                reverse!      select!    sort        to                             to_gsl_vector                        to_query       values_at
  +    append   collect!     deep_dup    each_index        first       hash          keep_if  pop                 recode_repeated       reverse_each  shelljoin  sort!       to_a                           to_gslv                              to_s           zip
  -    as_json  combination  delete      empty?            flatten     in_groups     last     prefix              reject                rindex        shift      sort_by!    to_ary                         to_gv                                to_sentence    |
  <<   assoc    compact      delete_at   eql?              flatten!    in_groups_of  length   prepend             reject!               rotate        shuffle    split       to_csv                         to_h                                 to_xml
  <=>  at       compact!     delete_eql  extract_options!  forty_two   include?      map      pretty_print        repeated_combination  rotate!       shuffle!   suffix      to_default_s                   to_json                              transpose
  ==   blank?   concat       delete_if   fetch             fourth      index         map!     pretty_print_cycle  repeated_permutation  sample        size       take        to_formatted_s                 to_json_with_active_support_encoder  uniq
  []   bsearch  count        drop        fifth             from        insert        pack     product             replace               second        slice      take_while  to_gsl_integration_qawo_table  to_param                             uniq!
self.methods: __pry__
locals: _  __  _dir_  _ex_  _file_  _in_  _out_  _pry_

It’s worth noting that you can see the modules declaring the methods of the object.

To see more of what pry can do for you, simply type help at the command line.

My ~/.pryrc file

Create a file in your home directory called ~/.pryrc.
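My full .pryrc lives in the linked gist; as a rough sketch, the history shortcuts described above can be wired up with Pry’s alias_command (this is an approximation, not the gist itself):

```ruby
# ~/.pryrc — a sketch of the history shortcuts described above
if defined?(Pry)
  Pry.commands.alias_command 'h',  'hist -T 20'    # last 20 commands
  Pry.commands.alias_command 'hg', 'hist -T 20 -G' # up to 20 matching commands
  Pry.commands.alias_command 'hG', 'hist -G'       # all matching commands
  Pry.commands.alias_command 'hr', 'hist -r'       # replay a command by number
end
```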

2014 Golden Gate Ruby Conference: Top 10 Reasons to Attend

Woo hoo! I’m going to the 2014 Golden Gate Ruby Conference. It’s at UCSF Mission Bay, San Francisco, September 19-20, 2014. I wrote an article about my experience last year, GoGaRuCo 2013: Community > Code. If you’re on the fence about attending, here are my top reasons why you should consider it. I recommend not delaying signing up, as last year I saw folks begging for tickets once the conference sold out. According to Leah Silber, one of the conference organizers, GoGaRuCo has sold out every year, except for maybe year one.

Top 10 Reasons To Attend GoGaRuCo

  1. San Francisco is a great town to visit, and there’s no better month to visit than September as dense fog is least likely!
  2. It’s a relatively small conference compared to RailsConf, and I find that much more engaging and relaxing. The attendees seem to be a mix of highly passionate Rubyists, mostly locals, with some from around the world.
  3. A one track conference is nice in that you don’t have to worry about picking which talks to attend.
  4. There’s a 15 minute break between each talk to socialize with fellow attendees or speakers. Socializing is why you come to these talks!
  5. Yehuda will likely come up with an interesting talk!
  6. Ruby programming is really more of an art and passion than work, and the people that attend GoGaRuCo reflect this!
  7. You’ll probably make a few new friends and leave inspired.
  8. The food is super, both at the conference and throughout the city. And the evening events last year were great as well.
  9. There’s probably going to be a job board, just in case that interests you.
  10. You won’t need any more T-shirts for another year!

Photography

I’m volunteering as the official photographer of GoGaRuCo. My mission is to “get 2-3 good shots of each speaker, a couple of audience shots during each day’s lunch and breaks, a shot or two of each exhibitor table, 2-3 team photos, and a smattering of everything else.” So please don’t be shy; ask to have your photograph taken.

Here’s a sample of shots I took at GoGaRuCo 2013. Tons more photos are linked here: GoGaRuCo 2013: Community > Code.

Available for Consulting

If you’d like to meet me around the time of GoGaRuCo, don’t hesitate to email me to arrange meeting in person. Perhaps you have a project that could use my help?

On a personal note, I spent the better part of my adulthood in San Francisco, so I’ve got tons of friends there. All my consulting clients tend to be from the Bay Area as well.

Remote Pair Programming Tips Using RubyMine and Screenhero

I had the opportunity to spend the entire workday remote pair programming from my office in Maui with a San Francisco client from Cloud City Development. We used our normal tools of RubyMine, Chrome, and iTerm2 on a 27” Cinema Display shared via Screenhero. While remote will probably never be 100% as good as true in-person pairing, it’s getting very close! Here’s some tips for effective remote pair programming. Scroll down to the bottom for the TLDR if you’re short on time. Overall, I would highly recommend remote pairing with RubyMine on a full 27” Cinema Display, using an iPad with a Google Hangout for eye contact!

Here’s a very detailed video of how to do remote collaboration:

Telepresence Using Video Chat on iPad

Per the recommendation of Tim Connor of Cloud City Development, I started using an iPad for the telepresence video only, running Google Hangouts, muting the microphone in the Hangout, and using the audio on Screenhero. While one can run Google Hangouts on the laptop, it can really suck up the CPU. Note, an iPhone, or probably an Android phone or tablet, would work equally well. In terms of the audio, the microphone and speakers are better on the computer. If one is using the laptop for the telepresence video and using multiple screens, it’s key to use the camera on the screen where one will be looking at the Hangout, and not at the Screenhero screen. As shown in the pictures below, it’s key that it’s obvious when the pairing partners are looking at each other versus at Screenhero. Incidentally, Screenhero did not suffer from any degradation when combined with the Google Hangout, regardless of whether the Hangout ran on the laptop or a mobile device.

In the below images, note where our eyes are focused.

Talking to each other, making eye contact

Both looking at screen

Talking to each other, making eye contact

Shaka from Steve and Justin

Screenhero

We both used Screenhero on Macs. I’ve done plenty of remote pair programming using Google Hangouts, but there typically only the person sharing the screen drives the code. Screenhero allows true screen sharing such that both programmers can do the typing and mousing. With the shared screen being a 27” Cinema Display, I set my Screenhero window to full screen and the resolution was nearly perfect. Yes, when scrolling and switching apps, there is a slight delay, but it was extremely manageable, to the point that I would almost forget I was working on a computer 3000 miles away. Although there’s a slight lag in seeing the keys you type, it’s minor enough that it’s not a nuisance. The dual cursor support works great. Here’s a video demo of the dual cursor support.

RubyMine IDE

My pairing partners and I were already using RubyMine, so it was a natural choice over the conventional remote pairing setup of tmux and Vim. RubyMine combined with Screenhero, the same size big screens, fast computers, and very good broadband resulted in a productive pairing setup. One thing I hear about Vim setups is that pair programmers tend not to customize their Vim keymaps. With RubyMine, that’s not an issue thanks to a feature called “Quick Switch Scheme”, which allows very fast switching of keyboard bindings. I’m a Vim user (IdeaVim), and I would have been crippled without my favorite RubyMine Vim bindings. I like the “Quick Switch” feature so much that I made a short screencast on it, displayed below.

RailsConf 2014

My Talk: Concerns, Decorators, Presenters, Service Objects, Helpers, Help me Decide

(Lack of) Live Coding in my Talk

Due to time constraints, I chose to skip the live coding I had prepared to do in my talk. Please let me know if you’d be interested in a screencast walking through the sample code. I will create one if there is sufficient demand.