Page-specific JavaScript in Rails 3

Premise

One of the neat features from Rails 3.1 and up is the asset pipeline:

The asset pipeline provides a framework to concatenate and minify or compress JavaScript and CSS assets. It also adds the ability to write these assets in other languages such as CoffeeScript, Sass and ERB.

This means that in production, you will have one big JavaScript file and one big CSS file. This reduces the number of requests the browser has to make and generally makes pages load faster.

JavaScript concatenation does bring about a problem, however. Executing code when the DOM has loaded is commonplace in most web applications today, but if everything is included in one big file, and more importantly the same file, for all actions on all controllers, how do you run code that is specific to just a single view?

Solution(s)

Obviously there is more than one way of solving this problem, and rather unusually for Rails, there doesn’t seem to be any dictated “best practice”. The closest I found is this excerpt from section 2 of the Rails Guide about the Asset Pipeline:

You should put any JavaScript or CSS unique to a controller inside their respective asset files, as these files can then be loaded just for these controllers with lines such as <%= javascript_include_tag params[:controller] %> or <%= stylesheet_link_tag params[:controller] %>.

It isn’t even followed by an example, which seems more like an indication that this isn’t something you should do at all.

Let’s start with this approach nonetheless.

Per controller inclusion

By default Rails has only one top-level JavaScript manifest file, namely app/assets/javascripts/application.js, which has the following content:

// This is a manifest file that'll be compiled into including all the files listed below.
// Add new JavaScript/Coffee code in separate files in this directory and they'll automatically
// be included in the compiled file accessible from http://example.com/assets/application.js
// It's not advisable to add code directly here, but if you do, it'll appear at the bottom of the
// the compiled file.
//
//= require jquery
//= require jquery_ujs
//= require_tree .

And this is included in the default layout with:

<%= javascript_include_tag "application" %>

N.B. When testing production locally, e.g. with rails s -e production, Rails by default won’t serve static assets, which is what application.js becomes after pre-compilation. To avoid any problems when testing production locally, the following setting needs to be changed from false to true in config/environments/production.rb:

# Disable Rails's static asset server (Apache or nginx will already do this)
config.serve_static_assets = true

Now let’s say we have a controller, ApplesController, and its corresponding CoffeeScript file, apples.js.coffee. We might try to include it as per the Rails Guide’s suggestion like so:

<%= javascript_include_tag params[:controller] %>

This will work just fine in development mode, but in production it produces the following error:

ActionView::Template::Error (apples.js isn't precompiled):

To remedy this, we need to do a couple of things. First off, we should remove the require_tree . directive from application.js, so we don’t wind up including the same script twice. Just removing the equals sign is enough, since Sprockets only treats comment lines starting with //= as directives:

//  require_tree .

To avoid a name clash, rename apples.js.coffee to something else, e.g. apples.controller.js.coffee. Then create a new manifest file named apples.js, which includes the CoffeeScript file:

//= require apples.controller

Lastly, the default configuration of Rails only pre-compiles application.js, so we need to tell the pre-compiler to also include apples.js. This is also done in config/environments/production.rb. Uncomment the following setting and change search.js to apples.js:

# Precompile additional assets (application.js, application.css, and all non-JS/CSS are already added)
config.assets.precompile += %w( apples.js )

Note that this setting is a match, so it could also be something like '*.js' in case you have more manifests, which would be the case for per-controller inclusion, as sketched below.
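
For instance, the wildcard variant could look like this. A sketch with a caveat: '*.js' is matched against logical asset paths, so it would also catch apples.controller.js from before and compile it on its own, meaning the naming scheme needs a little care:

# Precompile every top-level JavaScript manifest
config.assets.precompile += %w( *.js )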

Views

The same concept as above could be extended to target individual actions/views of each controller, by making the action part of the manifest name. Individual JavaScript files could then be included like so:

<%= javascript_include_tag "#{params[:controller]}.#{params[:action]}" %>

This assumes that every action on every controller has a dedicated JavaScript file, which most likely won’t be true. Another option could be a conditional include like so:

<%= yield :action_specific_js if content_for?(:action_specific_js) %>

Then move the include tag into the specific views that need it.
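
As a sketch, assuming a per-action manifest named apples.index.js exists and is precompiled, app/views/apples/index.html.erb could provide the include like this:

<% content_for :action_specific_js do %>
  <%= javascript_include_tag "apples.index" %>
<% end %>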

Testing for existence of a page element or class

The DOM loaded event handler could look something like this:

jQuery ->
  if $('#some_element').length > 0
    # Do some stuff here

This could also be a class on the body element, e.g.:

jQuery ->
  if $('body.controller_name_action_name').length > 0
    # Do some stuff here

And the ERB layout would then look like this:

<!DOCTYPE html>
<html>
<head>
  <title>AppName</title>
  <%= stylesheet_link_tag    "application" %>
  <%= javascript_include_tag "application" %>
  <%= csrf_meta_tags %>
</head>
<body class="<%= "#{params[:controller]}_#{params[:action]}" %>">
 
<%= yield %>
 
</body>
</html>

Function encapsulation and on-page triggering

Instead of registering handlers for the DOM loaded event, wrap the necessary code in a function that can be called later, and then trigger that function directly from the respective view.

There is one thing we need to consider though. Each CoffeeScript source file gets wrapped in its own closed scope, i.e. this CoffeeScript in apples.js.coffee:

apples_index = ->
  console.log("Hello! Yes, this is Apples.")

Becomes:

(function() {
  var apples_index;
  apples_index = function() {
    return console.log("Hello! Yes, this is Apples.");
  };
}).call(this);

So in order to have a globally callable function, we must first expose it somehow. We can do this by attaching the function to the window object, changing the above code like so:

window.exports ||= {}
window.exports.apples_index = ->
  console.log("Hello! Yes, this is Apples.")
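
This compiles to roughly the following. Because the function is assigned to a property on the global window object instead of to a local variable, it stays reachable from outside the wrapper:

(function() {
  window.exports || (window.exports = {});
  window.exports.apples_index = function() {
    return console.log("Hello! Yes, this is Apples.");
  };
}).call(this);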

Next, we insert a conditional yield in the application.html.erb layout, just before the closing body tag:

<!DOCTYPE html>
<html>
<head>
  <title>AppName</title>
  <%= stylesheet_link_tag    "application" %>
  <%= javascript_include_tag "application" %>
  <%= csrf_meta_tags %>
</head>
<body>

<%= yield %>

<%= yield :action_specific_js if content_for?(:action_specific_js) %>
</body>
</html>

We can now call the exposed function directly from our view like so:

<% content_for :action_specific_js do %>
<script type="text/javascript" language="javascript">
  $(function() { window.exports.apples_index(); });
</script>
<% end %>

Wrap up

None of these three examples is a “one-size-fits-all” solution, I would say. Dividing up the JavaScript source starts to make sense as soon as the JavaScript codebase grows past a certain size. It might be interesting to test out just how big that size is for a given bandwidth, but I think that’s out of scope for this post.

Given that there isn’t really a defined best practice yet, perhaps the Ruby community will come up with something better than the examples I presented here. This is definitely something that could be better thought out.

Retro gaming part 1 – The Elder Scrolls: Arena

Why

These days, The Elder Scrolls V: Skyrim is all the rage, and it does look like an awesome game. I briefly tried out Morrowind back in the day, but not enough to really remember anything from it.

I’ve decided that I want to try Skyrim at some point. I’m perfectly sure it is a complete standalone installment in the series, but then again… why not get familiar with the game world by playing its predecessors first?

  • A proper waste of time? ✓
  • An endeavor that will take months and months, if only playing a little now and then? ✓
  • Great fun? ✓

With this in mind, I better get crackin’.

The games

Bethesda Softworks has released the first two Elder Scrolls games as free downloads from their website. They both target MS-DOS, so to play them on my Mac, they have to run in DOSBox. So far I’ve only gone ahead with Arena, but it seems to run without error in DOSBox.

Cover art from Wikipedia.

Playing

Getting over the grueling 17-year-old graphics and interface takes a bit of patience, and playing the game without a manual or other instructions does provide some initial frustration. Luckily someone was awesome enough to put The Elder Scrolls Arena Player’s Guide online, so it wasn’t entirely impossible to figure out how to get started.

From what I’ve gathered so far, the main quest revolves around finding the eight pieces of the Staff of Chaos. Once found, the staff can be reassembled and used to defeat the evil wizard who has taken the king’s place, and to bring back the true king from another dimension. Good stuff.

So far I’ve only found the first two pieces, but I’m firmly determined to play the game through, no matter if it takes close to forever.

Let’s see if I ever get to Skyrim.

Latest tinkerings – simply-json

Intro

When working with JSON data, it is sometimes necessary to visualise it in a human-readable way. Since we care about the number of bytes we send to the browser, JSON is usually stripped of any unnecessary whitespace.

Good for size, bad for readability.
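
As a small made-up example, the same document stripped and then formatted:

{"name":"simply-json","tags":["json","formatter"]}

{
  "name": "simply-json",
  "tags": [
    "json",
    "formatter"
  ]
}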

A quick search reveals that there are already plenty of options for online formatters.

For their intended purpose, they all work. Trying them out though, I started to think about how it could be done in a simpler way, and that it might be a fun little project to try out on my own.

Application

For lack of a better name, as the post title suggests, I decided to call it simply-json. Feel free to suggest a more fitting one.

I wanted a minimal feature set, mostly what all the other formatters also provide:

  • Input a URL which points to some JSON data. Have it fetched and then formatted.
  • Input raw JSON and have it formatted.

I also included some design criteria to formalise my idea of what “simpler” is:

  • No page noise. Content that isn’t relevant to solving the task at hand should be kept to an absolute minimum.
  • No buttons. Why should I have to click a button when the browser is perfectly capable of detecting that I’ve input some text in a text field? (See the sketch right after this list.)
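
A minimal sketch of the no-buttons idea, using hypothetical element ids rather than the site’s actual markup; the input is re-formatted as soon as it parses:

// Hypothetical ids; not the site's actual markup
var input = document.getElementById('json-input');
var output = document.getElementById('json-output');

input.addEventListener('input', function () {
  try {
    output.textContent = JSON.stringify(JSON.parse(input.value), null, 2);
    input.style.borderColor = '';
  } catch (e) {
    // Unobtrusive error indication: just a red border
    input.style.borderColor = 'red';
  }
});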

Supplemental features:

  • Highlighting of matching brackets. A feature I like that some editors have: when moving the cursor over a bracket, the matching opening or closing bracket is indicated. (A rough sketch of the pairing idea follows after this list.)
  • Collapsible regions. Clicking a bracket should collapse the content. Also an editor feature.
  • Loading indication when fetching data via URL. (http://www.ajaxload.info/)
  • Unobtrusive error indication. Borders around text boxes go red on error.
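
The bracket pairing itself can be done with a single stack-based scan. A rough sketch of the idea, not the actual implementation, which naively ignores brackets inside string values:

// Map each bracket's text position to its partner's position
function matchBrackets(text) {
  var stack = [], pairs = {};
  for (var i = 0; i < text.length; i++) {
    var c = text.charAt(i);
    if (c === '{' || c === '[') {
      stack.push(i);
    } else if (c === '}' || c === ']') {
      var open = stack.pop();
      if (open !== undefined) {
        pairs[open] = i;
        pairs[i] = open;
      }
    }
  }
  return pairs;
}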

Considerations

Due to the same-origin policy enforced by most browsers, it’s not immediately possible to request JSON from a different domain than the current one. To do it, some form of proxy is needed.

An easy, fire-and-forget solution would be to use Yahoo! Query Language. The problem with YQL, though, is that it transforms the JSON into XML and, if requested as JSON, transforms it back to JSON again. This transformation is lossy, which means e.g. numbers are sent back as strings: {"number":42} => {"number":"42"}. According to the docs:

To prevent this “lossy” transformation, you append the query string parameter jsonCompat=new to the YQL Web Service URL that you are using.

At the time of writing, testing this does in fact reveal that the lossy number transformation is fixed. What hasn’t been fixed yet, though, is null values, which are returned as the string "null" instead.

So much for YQL.

Custom proxy

Keeping things in the spirit of simplicity, a service that can proxy a GET request shouldn’t be more than a few lines of code.

To that end, I chose to use Sinatra, which I’ve had good experiences with in the past. It really is an awesome lightweight web framework. Using Sinatra, this is all it takes:

require 'rubygems'
require 'sinatra'
require 'net/https'
 
get '/' do
  uri = URI(URI.encode(params[:uri]))
  https_session = Net::HTTP.new(uri.host, uri.port)
  https_session.use_ssl = true if uri.port == 443
  https_session.start
  # request_uri is the path plus the query string, if any (avoids an error on URLs without one)
  response = https_session.get(uri.request_uri)
  response.body
end

N.B. Updated Feb 10, 2012. New version includes uri query part.

I specifically chose 'net/https' so that https sources are also supported; it only adds one extra line and handles plain http just as well.

The proxy should be very straightforward. Assuming the endpoint is at /thelabs/simply-json/proxy, a request to "/thelabs/simply-json/proxy?uri=http://google.com" should be the equivalent of a request directly to "http://google.com".

The immediate problem, though, is that the current web server, Apache, is already running on port 80, and the same-origin policy blocks requests even to different ports on the same domain.

Fortunately, this can be solved with some config magic server-side.

Apache

Firing up the Sinatra service on localhost:6789, we need Apache to redirect traffic for the /proxy path to the service, instead of trying to serve content from that directory. This is what is needed in the config, in my case in the virtual host for éncoder.dk:

<Proxy *>
Order deny,allow
Allow from all
</Proxy>
 
ProxyPass /thelabs/simply-json/proxy http://localhost:6789
ProxyPassReverse /thelabs/simply-json/proxy http://localhost:6789

In case Apache hasn’t loaded mod_proxy, enabling it can be done with this command:

$ a2enmod proxy proxy_http

That’s all

Working demo can be found here:

http://éncoder.dk/thelabs/simply-json/

Go check it out.

Speeding up with parallel compression – pbzip2

Premise

Today I found myself in need of archiving some virtual machines, which are quite often rather large. The actual machine I was working on was a 4-core (8 with HT) Xeon powerhouse, and I was curious to see if there was any way to speed up compression times for this particular task.

Looking into things

Usually I grab my trusty old friend tar when creating archives, and it does get the job done well. The thing about tar, though, is that the compression it invokes is inherently single-threaded, so it doesn’t really matter how many CPU cores you throw at it.

After digging around a bit I found pbzip2. Description:

pbzip2 is a parallel implementation of the bzip2 block-sorting file compressor that uses pthreads and achieves near-linear speedup on SMP machines.

Sounds good, right? I decided to try it out and measure the results. The size of the virtual machine was about 11G:

~$ du -hs *
11G     WinXP_32Bit

With plain old tar and gzip compression (the z flag), it took about eight minutes:

~$ time tar zcvf winxp.tar.gz WinXP_32Bit
 
real    8m18.583s
user    6m47.089s
sys     0m15.129s

Not bad, but that is nothing compared to piping it through pbzip2:

~$ time tar -c WinXP_32Bit | pbzip2 -c > winxp.tar.bz2
 
real    4m54.942s
user    38m22.452s
sys     0m25.022s
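
As an aside, GNU tar can also invoke the compressor itself via the --use-compress-program option, which should be equivalent to the explicit pipe above:

~$ tar --use-compress-program=pbzip2 -cf winxp.tar.bz2 WinXP_32Bit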

Screenshots of htop showed the difference in CPU core utilisation.

Both resulting archives were of equal size, so the immediate benefit is purely speed:

~$ du -hs *
11G     WinXP_32Bit
6.2G    winxp.tar.bz2
6.2G    winxp.tar.gz

For good measure, I also timed the decompression speeds. Though there was still a gain in speed, it was not quite as significant as with compression:

~$ time tar zxvf winxp.tar.gz
 
real    5m8.636s
user    1m20.061s
sys     0m20.413s
~$ time pbzip2 -d winxp.tar.bz2

real    4m32.329s
user    13m15.814s
sys     0m19.057s

Something might be said about a number of other limiting factors, such as disk read/write speed etc. Playing around with different settings of pbzip2 might also reveal greater performance boosts than this simple example shows, but either way, it is now a welcome addition to my *nix toolkit.

Launching éDoist – Todoist client for Symbian

Sometime last week I released my latest hobby project on the Ovi Store. It is a Qt Quick application targeted at Symbian^3-based Nokia smartphones.

For now it only features the capabilities of a free account, since that is what I have myself.

Also worth noting is that it is an online-only application at the moment. Adding offline capabilities in a future version is on my todo list, however.

Support site with more screenshots: http://éncoder.dk/édoist
Ovi Store content page: http://store.ovi.com/content/183543

Update 09/08

The application is now open source under an MIT license. The source code is available at github: https://github.com/rhardih/edoist/