Gerard Condon's Blog

Journal of a software developer.

Upgrading to Rails 4.2 on OpenShift

I updated my Rails test app from 4.1 to 4.2. However, when I pushed to OpenShift, I got the following error:

You have already activated rack 1.5.2, but your Gemfile requires rack 1.6.0. 
Using bundle exec may solve this. (Gem::LoadError)

I found the answer on Google Groups here. The root cause is that OpenShift depends on Rack 1.5.2 and Passenger 4.0.18. The proper fix will have to wait until they update those versions. Until then, to work around the error, ssh into the OpenShift app and, in the app-root folder, run

gem install rack

Publishing an Application to OpenShift

I was doing some sample Rails apps recently and was looking for a place to run them. Heroku would have been my first port of call, but given the limits on database size for the free account, I looked around to see what else was out there. I ended up on OpenShift. The free account gives three gears (or VMs) with 1GB of storage on each. This suits me better, as I can run a proper database without the 10,000-row limit. The 1GB of storage is also persistent, so you can use it to store assets.

Red Hat provides an rhc gem which allows you to control your gears from the command line. You can create new apps from there, or you can do as I did and create them from the OpenShift web page. They have a large list of pre-configured applications covering languages such as Java, Ruby and Python, and frameworks like Node and Rails.

I selected the Rails 4 application. This forks the Rails 4 example repository from GitHub. The name you give your application forms the basis of its URL, i.e. appname-username.rhcloud.com. You can choose a database - the options for Rails are MySQL and Postgres. This creates a blank Rails application - to add OpenShift support to an existing application you can follow the steps here.

After a short wait a screen pops up with the database credentials and instructions on how to clone the application to a local git repository.

After I clone the repository, I typically change the configuration. I also want to store the application code on GitHub, so I rename the origin remote to openshift.

 git remote rename origin openshift

Then I add a new origin remote using

git remote add origin git@github.com:gerardcondon/rails-app.git

and I do an initial push to this remote

git push origin

The current master branch is still tracking openshift/master so now I change this to track origin/master using

git branch master -u origin/master

Now, by default, changes are pushed to GitHub when I run git push. When I want to deploy to OpenShift, I have to specify it manually using

git push openshift

After the push, OpenShift will stop the application, perform any db migrations and restart it. There is also support for adding your own operations at various stages of the deployment.

If I want to clone the repository on a new machine, I clone from GitHub and add an OpenShift remote using

git remote add openshift ssh://<<openshift repository ssh url>>

The repository’s ssh url is available from the application’s dashboard on OpenShift.

Linx 7 Standby Battery Life Fix

One issue I had with my Linx 7 was that the battery was draining really quickly in standby mode. I was having to shut down the tablet completely between uses. I found this page on a Linx forum site which recommended upgrading the wifi driver. I installed the driver and now the standby battery life is excellent.

Mini Linx 7 Review

I recently bought a Linx 7 tablet for 70 euro. For the price it’s a great little tablet. It comes with a full version of Windows 8 and a year’s subscription to Office 365. It’s great for Youtube/videos, Twitter and internet browsing. It’s cheap enough, compared to a four or five hundred euro iPad, that I can let the two-year-old watch his favourite videos on it without worrying too much. Some parts of the hardware are not up to scratch - the camera is terrible and the headphone jack has static noise - but for the price it’s fine.

One of the best features of the tablet compared to iOS is that it comes with a micro-SD card slot. That makes it really easy to copy over files from my desktop. I recently tried to load up my iPad with photos and videos to show the in-laws. Even though the videos were shot on another iOS device, it was a nightmare to get them onto the iPad - a convoluted process involving re-encoding and syncing through iTunes. On the Linx, I just copied the original files to the SD card in Windows, plugged the card into the slot and was able to watch them immediately.

In fact, that original iPad is really only used now as an e-reader. I think the usefulness of the applications on iOS has been going down recently compared to other platforms. Quality-wise they’re absolutely fine, but they’re fairly shallow compared to what they could be. The race-to-the-bottom app store economy and the limitations that iOS puts on inter-app communication severely limit the type of apps being written.

I play a good bit of chess, and the apps on iOS are nowhere near the quality of those on Windows (and apparently also those on Android). However, with the Linx I can run any Windows chess app. Even ChessBase runs fine on it, which gives access to a vast library of ChessBase Fritz Trainer videos. I can open pgn files in multiple applications and copy and paste data between them. I can use source control to version my notes. Also, on Windows, developers are able to charge proper rates for their applications, ensuring a robust marketplace for software.

The problem with using Windows applications on the Linx is that the UI is really awkward to use with just your fingers. However to me, this is a small price to pay for being able to use these apps at all on a tablet. Also you can use Bluetooth mice and keyboards with the Linx so at least you have some options.

Update: If you have a Linx tablet, you should upgrade the wifi drivers to fix the battery life.

Ruby Command Line Input Using Highline

I’ve been using Rakefiles a lot recently to automate tasks. I find them really useful compared to shell scripting, as I can run the rakefiles under different OSes (OS X and Windows) and have them behave the same in all cases.

I was building a simple script to automate the generation of pgn files for my chess games. I wanted to be able to enter the details of the games on the command line and then have my script output a pgn file.

I didn’t want to have to go messing about with low level operations such as puts and gets so I searched for something better. I found the Highline gem for this and was really happy with it.

It allows for a variety of input formats:

  • To display a prompt to the user and then store the input in a variable use event = ask("Event Title: ")

  • You can specify default values and the default will be listed as part of the prompt. On the command line simply hit enter to get the default value. timeControl = ask("TimeControl: ") { |q| q.default = "5400+15" }

  • You can create a menu from an array of values using the choose function. result = choose("1-0", "1/2-1/2", "0-1", "*")

These can be customised and there are a lot more options available such as entering passwords. If you are doing text input in Ruby, I would advise checking it out.
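To show where the answers end up, here is a minimal sketch of the output side of my script. The tag names follow the standard PGN tag format, but the helper and hard-coded values are my own illustration - in the real script the values come from the ask/choose prompts above.

```ruby
# Build the PGN tag section from the answers gathered via Highline's
# ask/choose. (Values are hard-coded here; in the real script they
# come from the command line prompts.)
def pgn_tags(event:, white:, black:, result:, time_control:)
  { "Event"       => event,
    "White"       => white,
    "Black"       => black,
    "Result"      => result,
    "TimeControl" => time_control }
    .map { |tag, value| "[#{tag} \"#{value}\"]" }
    .join("\n")
end

puts pgn_tags(event: "Club Championship", white: "Condon, Gerard",
              black: "Some Opponent", result: "1-0", time_control: "5400+15")
```

The result is the tag section of a pgn file, one [Tag "Value"] pair per line, ready to have the move list appended underneath.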

Open Plan Offices and High Tech Architecture

Jeremy Paxman recently wrote an article criticising open plan offices. They have been a bugbear of mine for a while also. My working life has been spent in open offices or cubicles (never hot desking thank God) and they’re terrible compared to proper offices. Background noise, air-con issues, lack of privacy and personal space are just some of the issues.

Paxman’s article put me in mind of a TV series which featured a lot of open plan offices: Brits who Built the Modern World on the BBC, which detailed the work of architects such as Norman Foster and Richard Rogers, the pioneers of High Tech Architecture. It’s an excellent series and I highly recommend it. The buildings shown typically had fantastic exteriors with really distinctive features. I was really impressed with the level of quality and inventiveness that went into these structures.

However the interiors of these buildings were typically vast open office spaces. The inventiveness that characterized the outsides had completely vanished when it came to fitting out the insides. Bog-standard, modular office furniture was the norm. One of the best examples is Norman Foster’s Willis Building in Ipswich. This has a stunning exterior of dark glass panels and a rooftop garden, all combined with a soul-destroying, open plan interior.

The really odd thing for me was that the architects genuinely bought into the open office ideals. They truly thought that this was the best way to design a workplace. I would have loved to see what they could have done had they put the effort into designing proper working spaces, combining private, focused space with collaborative areas. They thought they were designing workplaces which were more efficient and collaborative, but to my mind all they succeeded in doing was creating an environment where everyone is distracted and disrupted most of the time. It’s a real pity and a waste of their talents.

PS Let’s hope no-one ever interviewing me for a job in an open plan office reads this :)

Book Review: Beginning Backbone by James Sugrue

My current project at work is a large-scale Backbone application. The company had no web programming experience before this project and was mainly used to programming in Java. So in order to staff the project, we needed some way of converting our Java programmers into Backbone programmers.

To do this we looked at the various training materials available on the web. There are a number of excellent resources out there, such as Addy Osmani’s book, the TodoMVC app and the Backbone docs themselves. We wanted to develop a selection of documentation and training materials that we could hand to a new member of the team to get them up to speed.

One of the newer books that we’ve looked at is Beginning Backbone by James Sugrue. Disclosure: I’ve previously worked on the same team as James for two years at my current company.

JavaScript and Backbone Introduction

The book begins with a good introduction and overview of Backbone from an architectural point of view and gives examples of companies who have built products on Backbone. I liked this approach, as it’s one thing explaining why you should use Backbone from a coding perspective but it’s also nice to be able to justify the choice from a risk perspective to management. Having concrete examples of successful companies helps us make that case.

There is a chapter on JavaScript which is probably obligatory in a book like this. It’s fine as an introduction to the language, but you would need to combine this with something specifically for JavaScript like “Eloquent JavaScript” or “JavaScript: The Good Parts”.

Each of the components of Backbone is dealt with comprehensively. The models, collections, views, events and router are explained with plenty of examples. Templating is described alongside the views, using both Handlebars and Mustache.

After the introduction we get a walkthrough of how to create an application. The application is surprisingly comprehensive - it’s a Twitter clone, not the standard todo app. It deals with linking models to views, reusing views, and tying it all together with events.

Backbone EcoSystem

From there the book branches out to cover the wider Backbone ecosystem. Backbone is not an all-encompassing framework. In fact it is quite a simple one, with a lot of scope for customisation. It is a foundation upon which you will layer many plugins and libraries, so understanding what addons are available and how to use them is vital to getting the most out of Backbone.

The book covers a range of these plugins and libraries.

We had started coding well before the book was written and a lot of the choices we had made on Backbone plugins are mentioned in the book. It was nice to get some validation of those choices. In addition reading this section of the book prompted us to look at introducing view models to our code.

One of the problems we had was that it’s easy to see how a simple todo application can be built from Backbone, but it’s harder to extrapolate from there and design how a larger application should hang together. We encountered problems at scale, e.g. managing views and their resources when you have high double-digit numbers of views and templates. The book introduces two plugins - Marionette and Thorax - which extend Backbone to give more comprehensive view management. Even if the specific plugins described in the book are not for you, at least you will be made aware of the issues that await in the future.

As an aside, it is here that the book encounters one of the curses of JavaScript programming - the choice between two equally plausible alternatives! This has been the bane of my life for the past couple of years. For every situation you come across, there will be two equally valid options, and you won’t have enough information at the time to understand their pros and cons. Murphy’s law dictates that you will pick the wrong one, and you get to second-guess yourself for the rest of the project!

The specific JavaScript cases in the book are Marionette/Thorax and QUnit/Jasmine. This isn’t limited to JavaScript - in Rails, for example, you have the choice between the omakase and prime stacks. It’s beyond the scope of the book to give definitive answers on which to choose. You need to evaluate the options based on your own situation, but I think the book gives a good enough head start.

Building a JavaScript application - TDD, Build Systems

The book is not just about beginning Backbone programming. It is much more than that: it gives you a solid base from which to start developing JavaScript applications. Topics such as testing, automation, building and code management are dealt with. The benefits of TDD are explained, along with an introduction to two of the most popular JavaScript TDD frameworks - QUnit and Jasmine.

The book also has a chapter on best practices and design patterns. The emphasis is not just on using Backbone but on using it well. It covers user-visible concerns such as performance and memory leaks, as well as development concerns such as creating and maintaining a manageable code base. For example, JavaScript modularity is not straightforward: the language does not provide a way for files to include other files. As a programmer you definitely want to split your codebase into separate files and then compile them together for the released product. The book uses RequireJS to show you how to do this.

Negatives

The formatting of the code samples is off in quite a few places. The indentation is out and there are some spaces missing, turning var myarray into varmyarray. These are more than just formatting errors - they would cause errors if the code were run. There is a GitHub repository of the code samples from the book though, which partially makes up for this.

From a personal perspective, I don’t like the Grunt approach to build management, so I wasn’t too keen on the whole chapter devoted to it. We tried Grunt on our project at work and found that as the number of build steps increases, the JSON required to configure Grunt becomes more and more complex. I prefer code over configuration files, as then I have a chance to debug the build process, insert print statements and so on. I think there must be better tools out there. Stepping away from JavaScript and into Ruby, there is Rake, which is what we use on our project. If you are using Rails then you have the Asset Pipeline approach. I found Grunt hard to debug, and it was not easy to figure out what went wrong in a step in the middle of the build.

Summary

Overall I would recommend this book. I think it’s invaluable for ramping up new developers on a Backbone project. From an experienced programmer’s perspective, it is also an easy and quick way to gain a broad understanding of the Backbone landscape. It introduces a number of topics, not just Backbone, but JavaScript development in general. The book promotes a professional and structured approach to software development, making it suitable for a team who are beginning web development and want to get their process set up correctly.

Working With TypeScript

For the past year and a half, my team at work has been using TypeScript to implement a large single page application in Backbone. We’re over three quarters of the way through the project, closing in on our first release, and here are my thoughts on using TypeScript to date.

This is my first major foray into web development, so it also required ramping up on HTML, CSS and REST. Previously I had used JavaScript to implement the client side of a websocket API, but that project was non-GUI work.

Reasons for choosing TypeScript

The company has settled on TypeScript for web programming due to a number of factors.

  • Firstly, there is the additional security and peace of mind provided by type checking. For example, it prevents a lot of mistakes in calling functions with the wrong parameters. It also makes some refactorings easier, as the compiler can tell you when you’re calling functions that no longer exist or are passing the wrong types to a function.

  • TypeScript adds classical object-oriented constructs to JavaScript, e.g. interfaces and classes with inheritance. Rather than having to choose a library to implement inheritance, it is a first-class language feature. I find this, along with having a proper super keyword, much more usable in practice than prototypal inheritance.

    A nice feature is TypeScript’s support for implicit interfaces. The compiler will figure out whether a class implements an interface, rather than requiring the class to declare a list of implements X clauses in its definition. This reduces the friction of dealing with the type system.

  • TypeScript is compatible with JavaScript so any library out there can be used with our code with no problems.

  • Better tooling. The idea was that, since TypeScript has a proper type system, it would enable better tooling such as Intellisense. The theory was that programming in TypeScript would be a better experience because the IDE would be better.

    As an aside, I would question the value of Intellisense and the type of code it leads to. Take Java for example. When you combine Intellisense with modern IDEs’ ability to automatically import files, you greatly lower the barrier to coupling. It is no problem to include remote files, grab the inevitable Singleton instance, and execute large Law of Demeter busting method chains on them.

      GlobalSingletonReference.getInstance().getSomethingElse().andItsChild()
          .lawOfWhatExactlyNow().pleaseStopSoon().noTheresMore().invoke()
    

    I think Java tools have given the ability to create larger programs than can be properly maintained.

These features were seen as key to creating a more maintainable source code base especially at large scale.
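The class and interface points above can be sketched in a few lines. The Logger names here are my own illustration, not from our codebase:

```typescript
interface Logger {
  log(message: string): string;
}

// No "implements Logger" clause: TypeScript's structural typing means
// ConsoleLogger satisfies the interface just by having a matching log().
class ConsoleLogger {
  prefix: string;
  constructor(prefix: string) {
    this.prefix = prefix;
  }
  log(message: string): string {
    return this.prefix + ": " + message;
  }
}

class TimestampLogger extends ConsoleLogger {
  log(message: string): string {
    // A proper super keyword, instead of hand-rolled prototype juggling.
    return "[ts] " + super.log(message);
  }
}

const logger: Logger = new TimestampLogger("app"); // accepted implicitly
console.log(logger.log("started")); // "[ts] app: started"
```

The assignment on the last line type-checks even though TimestampLogger never mentions Logger - that is the implicit-interface behaviour described above.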

My Experience with TypeScript

We had written a JavaScript prototype in Backbone and we ported that to TypeScript, so that we could compare and see how it went. I tried to use TypeScript as much as possible to be fair to the experiment. You can get away with basically writing JavaScript and passing it through the TypeScript compiler but that’s no good to anyone really.

I found it made my code look more like Java or C#. This was especially the case with class definitions. Defining a class hierarchy in JavaScript is terrible - setting the prototype to the parent’s prototype, manually defining super and so on. The TypeScript version is very familiar to a Java or C# coder. Our group of TypeScript programmers were converted Java/C++ programmers, so this was a huge bonus.

Having interfaces was great. They’re very useful for defining APIs, and especially for documenting external APIs. One thing I hate about JavaScript is having to read documentation or readme files for third-party libraries in order to find out their API. An interface definition in the language itself is far superior, as it is a lot more concise and guaranteed to be correct, having gone through the compiler.

In the end, the code had the same classes with the same class names but the class implementations were far more readable due to the OO nature of TypeScript and the ability to define and program to interfaces.

I did find that refactoring was easier - operations like adding additional parameters to functions were trivial compared to JavaScript. For the JavaScript code, I had to rely on my unit tests to assure me that my refactorings were correct but here I could offload a lot of those tests to the compiler.

When we started on TypeScript it was version 0.8. The compiler was a bit rough then and crashed on some invalid input rather than reporting errors. It has been steadily improved since then and version 1.0 is perfectly fine for us, reporting the correct errors for all the previously crashing cases. Also the language has been added to and improved over time.

Things I didn’t like about TypeScript

On the flip side, there are a lot of things I don’t really like about TypeScript. Some features of JavaScript, e.g. functions that return different types, can’t be represented in TypeScript. In these circumstances you find yourself using the any type - the equivalent of using Object in Java. The problem is that any completely circumvents the type checker. Thus the more complex code ends up being the code with the least type checking.
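A small sketch of the problem. The example is my own; at the time TypeScript had no union types, so any was the only honest annotation for a function like this:

```typescript
// A JavaScript-ish function that returns a number or a string depending
// on its input. The only way to type it (pre union types) is "any",
// which switches the checker off for everything downstream.
function parseSetting(raw: string): any {
  const n = Number(raw);
  return isNaN(n) ? raw : n;
}

const timeout: any = parseSetting("30");
// The compiler accepts anything on an "any" value: a typo like
// timeout.toUpperCase() would only fail at runtime, never at compile time.
console.log(timeout + 1); // 31
```

So the most dynamic parts of the codebase - exactly the ones that would benefit most from checking - are the parts the checker can no longer see.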

Other JavaScript features such as mixins have such horrible syntax in TypeScript (see here) that they’re basically unusable. Mixins in particular require repeated boilerplate code to get past the compiler. That was the pattern with a lot of the issues I had with TypeScript: as you try to do the more dynamic JavaScript stuff, you end up writing and repeating declarations to get the compiler off your back. Ideally there would be some way to tell the TypeScript compiler that we are going to implement an interface dynamically - the implementation may not be here now, but it will be at runtime. We ended up generating a lot of this boilerplate using Ruby and Erb (a topic for another blog post).
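To give a flavour of the boilerplate, here is roughly what the standard mixin workaround looks like: an applyMixins helper that copies methods onto the prototype at runtime, plus a merged interface declaration whose only job is to keep the compiler happy. The class names are my own:

```typescript
class Disposable {
  isDisposed: boolean = false;
  dispose(): void { this.isDisposed = true; }
}

class Activatable {
  isActive: boolean = false;
  activate(): void { this.isActive = true; }
}

// The boilerplate: re-declare every mixed-in member via a merged
// interface so the compiler believes SmartObject has them, even though
// applyMixins is what actually provides them at runtime.
interface SmartObject extends Disposable, Activatable { }
class SmartObject { }

// Copy the mixin methods onto the target prototype at runtime.
function applyMixins(derived: any, bases: any[]): void {
  bases.forEach(base => {
    Object.getOwnPropertyNames(base.prototype)
      .filter(name => name !== "constructor")
      .forEach(name => {
        derived.prototype[name] = base.prototype[name];
      });
  });
}

applyMixins(SmartObject, [Disposable, Activatable]);

const obj = new SmartObject();
obj.activate();
console.log(obj.isActive); // true
```

Every mixed-in class has to be listed twice, once for the compiler and once for the runtime, and the two lists must be kept in sync by hand - exactly the sort of repetition we ended up generating with Erb.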

I tried debugging with source maps in Chrome, but I wasn’t a fan of the experience. Breakpoints would always get shifted a few lines, and it was hard to get them to break on a function. I was constantly wondering whether I had the correct version or a cached source map, and whether the TypeScript matched up to the JavaScript. I ended up just using the compiled JavaScript for debugging.

Continuing from the last point: with some TypeScript features you need to know what kind of code gets generated, e.g. whether a variable assignment ends up in the constructor or on the prototype. This matters when integrating with Backbone. Instance variables in TypeScript are not defined on the prototype, but in the constructor after the call to super. This means they are not defined by the time the Backbone constructor runs. The Microsoft solution is to put the call to super in the middle of the constructor, but this looks wrong to any Java programmer, and I could see someone inadvertently breaking the code by moving super back to the top of the constructor.
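The initialisation-order problem can be shown without Backbone itself. All it takes is a base class that, like Backbone.View, reads configuration from its constructor - the example names are mine:

```typescript
class Base {
  constructor() {
    // Backbone-style: the base constructor reads configuration that the
    // subclass is expected to provide (think Backbone.View's "events").
    this.initialize();
  }
  initialize(): void { }
}

class Sub extends Base {
  // Compiled to an assignment in Sub's constructor AFTER super() returns,
  // so it is still undefined while Base's constructor is running.
  events: string = "click";
  seenDuringConstruction: string | undefined;

  initialize(): void {
    this.seenDuringConstruction = this.events; // undefined at this point!
  }
}

const s = new Sub();
console.log(s.seenDuringConstruction); // undefined
console.log(s.events);                 // "click" (assigned after super())
```

This is why Backbone subclasses written in TypeScript have to put the configuration on the prototype (or call super mid-constructor) rather than using ordinary instance fields.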

TypeScript’s support for generics was almost good, but again there are some issues. The main one I ran into is that you can’t create a new instance of a generic type, e.g. for a generic type T you can’t do var x = new T(). There are ways around this, such as passing in functions that create objects, but the code they lead to is fairly bad.
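The usual workaround is to pass the constructor in explicitly as a value with a construct signature. A sketch, with made-up names:

```typescript
class Widget {
  name = "widget";
}

// "new T()" is illegal inside a generic, so the workaround is to hand the
// compiler a construct signature -- a value it knows can be new-ed.
class Repository<T> {
  private ctor: { new (): T };
  constructor(ctor: { new (): T }) {
    this.ctor = ctor;
  }
  create(): T {
    return new this.ctor();
  }
}

const repo = new Repository<Widget>(Widget);
console.log(repo.create().name); // "widget"
```

It works, but every call site has to name the class twice (once as the type argument, once as the constructor value), which is the kind of noise that made me call the resulting code fairly bad.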

The idea that types would lead to better tooling didn’t pan out for us. Taking IDEs first, the main tool that supports TypeScript is Visual Studio; there is also JetBrains’ WebStorm. As IDEs go these seem perfectly fine, though it’s a bit hard for me to evaluate them, as I’m not a fan of large IDEs. One issue, especially in the case of Visual Studio, is that they require large license fees. I don’t like criticising tools on cost, as I feel that companies should treat these as a required cost of hiring programmers. Unfortunately, a lot of companies don’t, so if I’m required to buy a personal license, I much prefer to buy one for a tool like Sublime Text.

TypeScript files import other files by means of a reference path at the top of the file. This is almost like Java, except that unfortunately the compiler does not enforce these references, so they must be manually maintained. That is impossible to get right in a large project. The only essential references are those for your base classes, but if you leave out the others then IDEs have problems locating type declarations. Worse, extraneous references that are not technically needed can lead to the TypeScript compiler generating invalid code that defines subclasses before their parent classes, which causes runtime exceptions. Not a great situation.

There aren’t many TypeScript plugins for Sublime Text, and there is no official one from Microsoft. There are also no code quality tools such as linters; it’s not much use running the JavaScript versions, as all they can see is the compiled code. The set of tools available for JavaScript is much larger and more mature. Even where you would think that having types would enable new tools, e.g. static analysis or dependency graph generators, there is nothing.

From a language point of view, I wonder if trying to make all valid JavaScript code be valid TypeScript code is harming them. Would they be better off going for a more C#-like language and mandating that any JavaScript live in separate files? That’s what we ended up doing anyway - we didn’t want to mix our JavaScript and TypeScript codebases.

Integration with Third Party Code

In order to use external JavaScript files from TypeScript, you must first create a definition file for the JavaScript API. This declares, in a manner similar to Java interfaces, the functions, classes and interfaces that the JavaScript code exposes. These files can be a pain to locate and maintain. There is a GitHub repository, DefinitelyTyped, which maintains a collection of .d.ts files for popular JavaScript libraries. These are typically of a high standard, but we have had to add missing functions to some Backbone .d.ts files. If there is none online, you have to write one yourself, which can involve reverse engineering the API and types of the library.
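To give a flavour, here is the shape of a declaration for a made-up chat library. In a real .d.ts file these would be ambient declare statements with no implementation at all; the stub object is only there to make the sketch runnable:

```typescript
// The shape of a hypothetical JavaScript chat library, written the way a
// declaration file would describe it.
interface Connection {
  send(message: string): number; // returns the number of characters sent
  close(): void;
}

interface ChatLib {
  connect(url: string): Connection;
}

// Stand-in implementation; with a real library this lives in the
// JavaScript file, and the declaration file carries only the types above.
const chatLib: ChatLib = {
  connect: (url: string) => ({
    send: (message: string) => message.length,
    close: () => { },
  }),
};

const conn = chatLib.connect("ws://example.com");
console.log(conn.send("hello")); // 5
```

Writing one of these by hand means working out, for every exported function, what it takes and what it returns - which is the reverse engineering mentioned above.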

I think there is a large risk in relying on these, given that they are maintained neither by the library owners in question nor by Microsoft. It is problematic to update a library, as now you also have to update the .d.ts files. All going well, the DefinitelyTyped version will be updated to the latest release, but there are no guarantees. What happens if the maintainer of the repository gets fed up and stops updating the files?

However, once found, these .d.ts files can be extremely useful. When working with websockets, the TypeScript lib.d.ts file was the best documentation I found on the subject. I think the interface/protocol concept is a great addition to any programming language; it is especially useful for documenting APIs, and it harms Ruby and JavaScript not to have such a construct.

It can be a bit tricky to integrate your TypeScript code with existing JavaScript libraries. As outlined above with Backbone, some libraries need code on the prototype, so you need to know what code TypeScript generates. There is also the question of where in the hierarchy the library goes. We found it best to have the JavaScript classes at the top of the inheritance tree and the TypeScript ones in the subclasses.

Conclusions

One issue I’d have with TypeScript is trying to gauge Microsoft’s commitment to the language. Are they really in this for the long term? For example, the code samples on their website haven’t been updated in ages. And how large can the TypeScript community get? Will there really be a critical mass of developers abandoning JavaScript for it - especially considering Microsoft’s past attitude to the web and Internet Explorer? They have burned quite a lot of bridges at this stage. If I’d suffered for years working around IE6’s issues, the last thing I’d do is switch to Microsoft’s new web language.

Overall though, I think it was worthwhile for the company to use TypeScript. The pros outweigh the cons, especially once you identify the issues and develop coding standards to avoid them. As a developer I would have preferred CoffeeScript as a JavaScript replacement, but I can see how it is easier to shift Java developers over to TypeScript. I think it’s given the company a lot of security that it wouldn’t have had with JavaScript.

Automating Jasmine Unit Tests

For the first cut at automating my JavaScript unit tests, I started running them from the command line via PhantomJS. PhantomJS is a headless browser: it will render HTML & CSS and execute JavaScript, but will not display anything on screen. The steps I followed were:

  • I installed PhantomJS from here using Homebrew: brew install phantomjs.

  • I got the command for running PhantomJS here.

  • I found the default output from PhantomJS lacking in detail. I came across a good link here which shows how to add stack traces on failure and colours to the output using a console reporter.

In future, I’d like to add this to a build system which will run JSHint on my code and also do whatever minification/optimizations are needed. Grunt looks like a good tool for this, so I will investigate it further.

The Design of iCloud

There have been a lot of blog posts from developers recently about the problems with iCloud syncing. The Verge has a great summary here. Quite a few are removing iCloud from their products and going with other syncing options such as Dropbox.

The impression I get from these discussions is that it’s the reliability of iCloud that’s the problem, i.e. if iCloud was rock solid then it would be a great option for your app. I disagree with this view and I think the design of iCloud is fundamentally flawed.

I think that even if iCloud database syncing were perfectly reliable, it would still be a bad way of syncing data. One conclusion I’ve drawn from looking at web backends, and Rails in particular, is that iCloud is only useful if you want to stick to Apple devices. There is no way to get at the data outside of iOS or OS X; in particular, it is impossible to access it from a web application. For this reason, I think it’s vital to have a proper backend if you are storing data in the cloud.

The other mode of iCloud syncing is document-based syncing. The issue I have with this is that anything stored in iCloud is restricted to the application that created it. This is a major problem when an application stores data in a common file format (e.g. plain text, or image formats such as PNG or JPEG) that you might expect to be able to use in another application. Dropbox is a far superior solution here. I feel much more confident in the apps which use it over iCloud, as I will always have access to my data files.

The other day, Brent Simmons posted a great proposal for an Apple backend service. They really need to do something here, as their competitors aren’t standing still: Microsoft is on the right track with Azure, and Amazon’s cloud computing services are going from strength to strength. It will be interesting to see if anything is announced for iCloud at WWDC.