Articles tagged 'development'

Page 2 of 4


    Publishing an application to OpenShift

    I was working on some sample Rails apps recently and was looking for a place to run them. Heroku would have been my first port of call, but given its limits on database size for free accounts, I looked around to see what else was out there. I ended up on OpenShift. The free account gives you 3 gears (essentially VMs) with 1GB of storage on each. This suits me better, as I can run a proper database without Heroku's 10,000-row limit. The 1GB of storage is also persistent, so you can use it to store assets.

    Red Hat provides an rhc gem which allows you to control your gears from the command line. You can create new apps from there, or you can do as I did and create them from the OpenShift web page. They have a large list of pre-configured applications covering languages such as Java, Ruby and Python, and frameworks like Node and Rails.

    I selected the Rails 4 application. This forks the Rails 4 example repository from GitHub. The name you give your application forms the basis of its URL, i.e. appname-username.rhcloud.com. You can choose a database - the options for Rails are MySQL and PostgreSQL. This creates a blank Rails application - to add OpenShift support to an existing application you can follow the steps here.

    After a short wait a screen pops up with the database credentials and instructions on how to clone the application to a local git repository.

    After cloning the repository, I typically change the remote configuration. I also want to store the application code on GitHub, so I rename the origin remote to openshift.

    git remote rename origin openshift
    

    Then I add a new origin remote using

    git remote add origin git@github.com:gerardcondon/rails-app.git
    

    and I do an initial push to this remote

    git push origin
    

    The current master branch is still tracking openshift/master so now I change this to track origin/master using

    git branch master -u origin/master
    

    Now, by default, changes are pushed to GitHub when I run git push. When I want to deploy to OpenShift, I have to specify the remote explicitly using

    git push openshift
    

    After the push, OpenShift stops the application, performs any database migrations and restarts it. There is also support for adding your own operations at various stages of the deployment.

    If I want to clone the repository on a new machine, I clone from GitHub and add an OpenShift remote using

    git remote add openshift ssh://<<openshift repository ssh url>>
    

    The repository's SSH URL is available from the application's dashboard on OpenShift.

    posted on June 7, 2015 in development


    Linx 7 Standby Battery Life Fix

    One issue I had with my Linx 7 was that the battery was draining really quickly in standby mode. I was having to shut down the tablet completely between uses. I found this page on a Linx forum site which recommended upgrading the wifi driver. I installed the driver and now the standby battery life is excellent.

    posted on June 1, 2015 in development


    Mini Linx 7 Review

    I recently bought a Linx 7 tablet for 70 euro. For the price it's a great little tablet. It comes with a full version of Windows 8 and a year's subscription to Office 365. It's great for YouTube/videos, Twitter and internet browsing. It's cheap enough, compared to a four or five hundred euro iPad, that I can let the two-year-old watch his favourite videos on it without worrying too much. Some parts of the hardware are not up to scratch - the camera is terrible and the headphone jack has static noise - but for the price it's fine.

    One of the best features of the tablet compared to iOS is that it comes with a micro-SD card slot. That means it is really easy to copy over files from my desktop. I recently tried to load up my iPad with photos and videos to show the in-laws. Even though the videos were shot on another iOS device, it was a nightmare to get them onto the iPad - a convoluted process involving re-encoding and syncing through iTunes. On the Linx, I just copied the original files to the SD card in Windows, plugged the card into the slot and was able to watch them immediately.

    In fact, that original iPad is really only used now as an ereader. I think the usefulness of the applications on iOS has been going down recently compared to other platforms. Quality-wise they're absolutely fine, but they're fairly shallow compared to what they could be. The race-to-the-bottom App Store economy and the limitations that iOS puts on inter-app communication severely limit the types of apps being written.

    I play a good bit of chess and the apps on iOS are nowhere near the quality of those on Windows. And apparently also those on Android. However with the Linx I can run any Windows chess app. Even Chessbase runs fine on it, which gives access to a vast library of Chessbase Fritz Trainer videos. I can open pgn files in multiple different applications and copy and paste data between them. I can use source control to version my notes. Also on Windows, developers are able to charge proper rates for their applications, thus ensuring a robust marketplace for software.

    The problem with using Windows applications on the Linx is that the UI is really awkward to use with just your fingers. However to me, this is a small price to pay for being able to use these apps at all on a tablet. Also you can use Bluetooth mice and keyboards with the Linx so at least you have some options.

    Update: If you have a Linx tablet, you should upgrade the wifi drivers to fix the standby battery life.

    posted on May 20, 2015 in development


    Ruby Command Line Input Using Highline

    I've been using Rakefiles a lot recently to automate tasks. I find them really useful in comparison to shell scripting, as I can run the Rakefiles under different OSes (OS X and Windows) and have them behave the same in all cases.

    I was building a simple script to automate the generation of pgn files for my chess games. I wanted to be able to enter the details of the games on the command line and then have my script output a pgn file.

    I didn't want to go messing about with low-level operations such as puts and gets, so I searched for something better. I found the Highline gem and was really happy with it.

    It allows for a variety of input formats:

    • To display a prompt to the user and then store the input in a variable:

        event = ask("Event Title: ")

    • You can specify default values and the default will be listed as part of the prompt. On the command line, simply hit enter to get the default value:

        timeControl = ask("TimeControl: ") { |q| q.default = "5400+15" }

    • You can create a menu from an array of values using the choose function:

        result = choose("1-0", "1/2-1/2", "0-1", "*")

    These can be customised and there are a lot more options available such as entering passwords. If you are doing text input in Ruby, I would advise checking it out.

    posted on April 14, 2015 in development, chess


    Open Plan Offices and High Tech Architecture

    Jeremy Paxman recently wrote an article criticising open plan offices. They have been a bugbear of mine for a while also. My working life has been spent in open offices or cubicles (never hot desking thank God) and they're terrible compared to proper offices. Background noise, air-con issues, lack of privacy and personal space are just some of the issues.

    Paxman's article put me in mind of a TV series which featured a lot of open plan offices. This was the Brits who Built the Modern World series on the BBC, which detailed the work of architects such as Norman Foster and Richard Rogers, the pioneers of High Tech Architecture. It's an excellent series and I highly recommend it. The buildings shown typically had a fantastic exterior with really distinctive features. I was really impressed with the level of quality and inventiveness that went into these structures.

    However, the interiors of these buildings were typically vast open office spaces. The inventiveness that characterised the outsides had completely vanished when it came to fitting out the insides. Bog-standard, modular office furniture was the norm. One of the best examples was Norman Foster's Willis Building in Ipswich. This has a stunning exterior of dark glass panels and a rooftop garden, all combined with a soul-destroying, open plan interior.

    The really odd thing for me was that the architects really bought into the open office ideals. They truly thought that this was the best way to design a workplace. I would have loved to see what they could have done if they had put the effort into designing proper working spaces which combined private, focused space with collaborative areas. They thought they were designing workplaces which were more efficient and collaborative, but to my mind all they succeeded in doing was creating an environment where everyone is distracted and disrupted most of the time. It's a real pity and a waste of their talents.

    PS Let's hope no-one ever interviewing me for a job in an open plan office reads this :)

    posted on December 9, 2014 in development


    Working With Typescript

    For the past year and a half, my team at work have been using TypeScript to implement a large single-page application in Backbone. We're over three quarters of the way through the project and closing in on our first release, so here are my thoughts on using TypeScript to date.

    This is my first major foray into web development, so it also required ramping up on HTML, CSS and REST. Previously I had used JavaScript to implement the client side of a websocket API, but that project was non-GUI work.

    Reasons for choosing TypeScript

    The company has settled on TypeScript for web programming due to a number of factors.

    • First, there is the additional safety and peace of mind provided by type checking. For example, this prevents a lot of mistakes in calling functions with the wrong parameters. It makes some refactorings easier, as the compiler can tell you when you're calling functions that no longer exist or passing the wrong types to a function.

    • TypeScript adds classical object-oriented constructs to JavaScript, e.g. interfaces and classes with inheritance. Rather than having to choose a library to implement inheritance, it is instead a first-class language feature. I find this, along with having a proper super keyword, to be much more usable in practice than prototypal inheritance.

    A nice feature is that TypeScript supports implicit interfaces. The compiler will figure out whether a class implements an interface, rather than requiring the class to declare a list of implements X clauses in its definition. This reduces the friction of dealing with the type system (there is a short sketch of this at the end of this section).

    • TypeScript is compatible with JavaScript so any library out there can be used with our code with no problems.

    • Better tooling. The idea here was that, because TypeScript has a proper type system, it would enable better tooling such as IntelliSense, and that programming in TypeScript would therefore be a better experience because the IDE could do more for you.

    As an aside, I would question the value of IntelliSense and the type of code it leads to. Take Java for example. When you combine IntelliSense with modern IDEs' ability to automatically import files, you greatly lower the barrier to coupling. It is no problem to include remote files, grab the inevitable Singleton instance, and execute large, Law of Demeter-busting method chains on them.

        GlobalSingletonReference.getInstance().getSomethingElse().andItsChild()
            .lawOfWhatExactlyNow().pleaseStopSoon().noTheresMore().invoke()
    

    I think Java tooling has given developers the ability to create larger programs than can be properly maintained.

    These features were seen as key to creating a more maintainable codebase, especially at large scale.
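
    To illustrate the implicit-interface point above, here is a minimal sketch (the names are invented for this example). The class never mentions the interface, yet the compiler accepts it because the shapes match:

        interface Logger {
            log(message: string): void;
        }

        class ConsoleLogger {
            // No "implements Logger" clause anywhere.
            log(message: string): void {
                console.log(message);
            }
        }

        function runJob(logger: Logger): void {
            logger.log("job started");
        }

        runJob(new ConsoleLogger());   // accepted: ConsoleLogger structurally satisfies Logger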

    My Experience with TypeScript

    We had written a JavaScript prototype in Backbone and we ported that to TypeScript, so that we could compare and see how it went. I tried to use TypeScript as much as possible to be fair to the experiment. You can get away with basically writing JavaScript and passing it through the TypeScript compiler but that's no good to anyone really.

    I found it made my code look more like Java or C#. This was especially the case with class definitions. Defining a class hierarchy in JavaScript is terrible - needing to set the prototype to the parent's prototype, manually defining super, etc. The TypeScript version is immediately familiar to a Java or C# coder. Our group of TypeScript programmers were converted Java/C++ programmers, so this was a huge bonus.
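
    As a rough sketch of what that looks like (the class names here are invented), a TypeScript hierarchy reads much like the Java or C# equivalent:

        class BaseView {
            constructor(public el: string) { }

            render(): void {
                console.log("rendering into " + this.el);
            }
        }

        class BoardView extends BaseView {
            constructor(el: string, private fen: string) {
                super(el);              // a real super call, no prototype juggling
            }

            render(): void {
                super.render();         // invoke the parent implementation
                console.log("drawing position " + this.fen);
            }
        }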

    Having interfaces was great. They're very useful for defining APIs, and especially so for documenting external APIs. One thing I hate about JavaScript is having to read documentation or readme files for third-party libraries in order to find out their API. An interface definition in the language itself is far superior, as it is a lot more concise and guaranteed to be correct, having gone through the compiler.

    In the end, the code had the same classes with the same class names but the class implementations were far more readable due to the OO nature of TypeScript and the ability to define and program to interfaces.

    I did find that refactoring was easier - operations like adding additional parameters to functions were trivial compared to JavaScript. For the JavaScript code, I had to rely on my unit tests to assure me that my refactorings were correct but here I could offload a lot of those tests to the compiler.

    When we started on TypeScript it was version 0.8. The compiler was a bit rough then and crashed on some invalid input rather than reporting errors. It has been steadily improved since then, and version 1.0 is perfectly fine for us, reporting the correct errors for all the previously crashing cases. The language itself has also been extended and improved over time.

    Things I didn't like about TypeScript

    On the flip side, there are a lot of things that I don't really like about TypeScript. Some features of JavaScript, e.g. functions that return different types, can't be represented in TypeScript. In these circumstances you find yourself using the any type - the equivalent of using Object in Java. The problem is that using it completely circumvents the type checker. Thus the more complex code ends up being the code with the least type checking.
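
    A minimal sketch of the kind of thing I mean (the function and its names are invented for illustration): a function whose return type depends on its input ends up typed as any, and the compiler then happily accepts code that will blow up at runtime:

        // Hypothetical helper: returns a number for numeric strings, otherwise the raw string.
        function parseCell(raw: string): any {
            var n = parseFloat(raw);
            return isNaN(n) ? raw : n;
        }

        var value = parseCell("42");
        value.toUpperCase();   // compiles fine, but fails at runtime because value is a number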

    Other JavaScript features such as mixins have such horrible syntax in TypeScript (see here) that they're basically unusable. Mixins in particular require repeated boilerplate code to get past the compiler. That was a recurring pattern in the issues I had with TypeScript: as you try to do the more dynamic JavaScript stuff, you end up writing and repeating declarations to get the compiler off your back. Ideally there would be some way to tell the TypeScript compiler that we are going to implement an interface dynamically - the implementation may not be here now, but it will be at runtime. We ended up generating a lot of this boilerplate code using Ruby and Erb (a topic for another blog post).
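
    This is roughly the shape of the boilerplate I mean - a sketch based on the commonly documented TypeScript mixin pattern, with invented class names. Every mixin member has to be re-declared on the target class purely to satisfy the compiler, while the real implementations are copied in at runtime:

        class Disposable {
            isDisposed: boolean;
            dispose() { this.isDisposed = true; }
        }

        class Widget implements Disposable {
            // Boilerplate: re-declare each mixin member so the compiler is happy.
            isDisposed: boolean;
            dispose: () => void;
        }

        // Runtime helper that copies the mixin's members onto the target prototype.
        function applyMixins(derived: any, bases: any[]) {
            bases.forEach(base => {
                Object.getOwnPropertyNames(base.prototype).forEach(name => {
                    derived.prototype[name] = base.prototype[name];
                });
            });
        }

        applyMixins(Widget, [Disposable]);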

    I tried debugging with source maps in Chrome, but I wasn't a fan of the experience. Breakpoints would always get shifted a few lines and it was hard to get them to break on a function. I was constantly wondering whether I had the correct version or a cached source map, and whether the TypeScript actually matched up with the JavaScript. I ended up just using the compiled JavaScript for debugging.

    Continuing with the last point, with some TypeScript features you need to know what kind of code gets generated, e.g. whether a variable assignment ends up in the constructor or on the prototype. For example, this is necessary when integrating with Backbone. Instance variables in TypeScript are not defined on the prototype but are instead assigned in the constructor, after the call to super. This means they are not defined by the time the Backbone constructor is called. The Microsoft solution is to put the call to super in the middle of the constructor, but this looks wrong to any Java programmer and I could see them inadvertently breaking the code by moving super to the top of the constructor.
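
    A rough sketch of the problem (the class and property names are invented, and a Backbone .d.ts is assumed to be available). The events initialiser below is compiled into the constructor after the call to super, so Backbone.View's own constructor - which reads this.events to wire up event delegation - runs before the assignment has happened:

        class GameListView extends Backbone.View {
            events = { "click .refresh": "refresh" };   // instance initialiser, not a prototype property

            refresh() { /* ... */ }
        }

        // Roughly what the compiler emits for the constructor:
        //
        //   function GameListView() {
        //       _super.apply(this, arguments);                   // Backbone.View's constructor runs here
        //       this.events = { "click .refresh": "refresh" };   // too late: delegateEvents has already run
        //   }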

    TypeScript's support for generics was almost good, but again there are some issues. The main one I ran into is that you can't create a new instance of a generic type, e.g. for a generic type T you can't do var x = new T(). There are ways around this by passing in functions that create the objects, but the code they lead to is fairly bad.
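
    The usual workaround is to pass the constructor in explicitly and give it a construct signature - a sketch with invented names:

        // Instead of new T(), take the constructor as a parameter.
        function createInstance<T>(ctor: new () => T): T {
            return new ctor();
        }

        class Game {
            moves: string[] = [];
        }

        var game = createInstance(Game);   // works, but every call site has to carry the constructor around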

    The idea that types would lead to better tooling didn't pan out for us. Taking IDEs first, I think the main tool that supports TypeScript is Visual Studio. There is also JetBrains' WebStorm. As IDEs go these seem perfectly fine. It's a bit hard for me to evaluate this as I'm not a fan of large IDEs. One issue with these is that, especially in the case of Visual Studio, they require large license fees. I don't like criticising tools on cost issues, as I feel that companies should treat these as a required cost of hiring programmers. Unfortunately, a lot of companies don't, so if I'm required to buy a personal license, I much prefer to buy a license for a tool like Sublime Text.

    TypeScript files import other files by means of a reference path at the top of the file. This is almost like Java imports, except that unfortunately the compiler does not enforce them, so they have to be maintained manually. This is impossible to get right for a large project. The only essential ones are those for your base classes, but if you leave out the others then IDEs have problems locating type declarations. If you have extraneous references that are not technically needed, this can lead to the TypeScript compiler generating invalid code that defines subclasses before their parent classes. When run, these cause runtime exceptions. Not a great situation.
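
    For reference, this is what those comments look like at the top of a file (the paths here are invented). Nothing checks that the list matches what the file actually uses:

        /// <reference path="views/base_view.ts" />
        /// <reference path="models/game.ts" />

        class GameView extends BaseView {
            model: Game;
        }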

    There aren't a great number of TypeScript plugins for Sublime Text and there is no official one from Microsoft. Also there are no code quality tools such as linters. It's not much use running the JavaScript versions as the only thing they can run on is the compiled code. The set of tools available for JavaScript is much larger and more mature. Even where you would think that having types would allow for newer tools e.g. static analysis or dependency graph generators, there is nothing.

    From a language point of view, I wonder if trying to make all valid JavaScript code be valid TypeScript code is harming them. Would they be better off going for a more C#-like language and mandating that any JavaScript should live in separate files? That's what we ended up doing anyway - we didn't want to mix our JavaScript and TypeScript codebases.

    Integration with Third Party Code

    In order to use external JavaScript files in TypeScript, you must first create a definition file for the JavaScript API. This declares, in a manner similar to Java interfaces, the functions, classes and interfaces that the JavaScript code exposes. These files can be a pain to locate and maintain. There is a GitHub repository, DefinitelyTyped, which maintains a collection of .d.ts files for popular JavaScript libraries. These are typically of a high standard, but we have had to add missing functions to some Backbone .d.ts files. If there is none online, you have to write one yourself, which can involve reverse-engineering the library's API and types.
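
    A minimal sketch of what such a file looks like, for an imaginary library (everything here is invented for illustration) - it declares the shape of the API with no implementation:

        // pgn-writer.d.ts - hypothetical declaration file for an imaginary JavaScript library.
        declare module PgnWriter {
            interface GameHeader {
                event: string;
                result: string;
            }

            function write(header: GameHeader, moves: string[]): string;
        }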

    I think there is a large risk in using these, given that they are maintained neither by the library owners in question nor by Microsoft themselves. It is problematic to update the libraries, as you now also have to update the .d.ts files. If everything goes well, the DefinitelyTyped version will be updated to match the latest release, but there are no guarantees. What happens if the maintainer of this repository gets fed up and stops updating the files?

    However, once they are found, these .d.ts files can be extremely useful. When working with WebSockets, the TypeScript lib.d.ts file was the best documentation I found on the subject. I think the interface/protocol concept is a great addition to any programming language. It is especially useful for documenting APIs, and it harms Ruby and JavaScript not to have such a construct.
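
    For instance, the WebSocket declarations in lib.d.ts read like concise API documentation. This is an abridged, from-memory sketch rather than the verbatim declarations:

        interface WebSocket {
            readyState: number;
            url: string;
            onopen: (ev: Event) => any;
            onmessage: (ev: MessageEvent) => any;
            onclose: (ev: CloseEvent) => any;
            send(data: any): void;
            close(code?: number, reason?: string): void;
        }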

    It can be a bit tricky to integrate your TypeScript code with existing JavaScript libraries. As outlined above with Backbone, some libraries need code to be on the prototype, so you need to know what code TypeScript generates. There is also the question of where in the hierarchy the library goes. We found it best to have the JavaScript classes at the top of the inheritance tree and the TypeScript classes in the subclasses.

    Conclusions

    One issue I'd have with TypeScript is trying to gauge Microsoft's commitment to the language. Are they really in this for the long term? For example, the code samples on their website haven't been updated in ages. Also, how large can the TypeScript community get? Is there really going to be a critical mass of developers abandoning JavaScript for it, especially considering Microsoft's past attitude to the web and Internet Explorer? The number of bridges they must have burned is quite large at this stage. If I'd suffered for years working around IE6's issues, the last thing I'd do is switch to Microsoft's new web language.

    Overall though, I think it was worthwhile for the company to use TypeScript. The pros outweigh the cons, especially once you identify the issues with TypeScript and develop coding standards to avoid them. As a developer I would have preferred CoffeeScript as a JavaScript replacement, but I can see how it would be easier to shift Java developers over to TypeScript. I think it's given them a lot of security that they wouldn't have had with JavaScript.

    posted on September 19, 2014 in development


    Automating Jasmine Unit Tests

    For the first cut at automating my JavaScript unit tests, I started running them from the command line via PhantomJS. PhantomJS is a headless browser, so it will render my HTML & CSS and execute the JavaScript, but will not display anything on the screen. The steps I followed were:

    • I installed PhantomJS from here using Homebrew: brew install phantomjs.

    • I got the command for running PhantomJS here.

    • I found the default output from PhantomJS to be lacking in detail. I came across a good link here which shows how to add stack traces on failure and how to add colours to the output using a console reporter.

    In future, I'd like to add this to a build system which will run JSHint on my code and also do whatever minification/optimizations are needed. It's looking like Grunt is a good tool for this, so I will investigate that further.

    posted on July 10, 2013 in development


    The Design of iCloud

    There have been a lot of blog posts from developers recently about the problems with iCloud syncing. The Verge has a great summary here. Quite a few are removing iCloud from their products and going with other syncing options such as Dropbox.

    The impression I get from these discussions is that it's the reliability of iCloud that's the problem, i.e. if iCloud was rock solid then it would be a great option for your app. I disagree with this view and I think the design of iCloud is fundamentally flawed.

    I think that even if iCloud database syncing was perfectly reliable, it would still be a bad way of syncing data. One conclusion I've drawn from looking at web backends, and Rails in particular, is that iCloud is only useful if you want to stick to Apple devices. There is no way to get at the data outside of iOS or OS X. In particular, it is impossible to access it from a web application. For this reason, I think it's vital to have a proper backend if you are storing data in the cloud.

    The other mode of iCloud syncing is document-based syncing. The issue I have with this is that anything stored in iCloud is restricted to the application that created it. This is a major issue when an application stores data in a common file format (e.g. plain text, or image formats such as PNG or JPEG) that you might expect to be able to use in another application. Dropbox is a far superior solution in this case. I feel much more confident in the apps which use Dropbox over iCloud, as I will always have access to the data files.

    The other day, Brent Simmons posted a great proposal for an Apple backend service. They really need to do something here as their competitors aren't standing still. Microsoft is on the right track here with Azure and similarly Amazon's cloud computing services are going from strength to strength. It will be interesting to see if anything will be announced for iCloud at WWDC.

    posted on June 4, 2013 in development


    Test Driven Design in practice

    I recently tried implementing a JavaScript project at work using the testing methods I've learned from the Destroy All Software screencasts. It ended up being some of the best code I've written. The interfaces grew neatly, it wasn't over-designed and it was completely covered by tests. It's the project whose correctness I have the most confidence in. It's nice to know that whatever modifications come in future, as long as all the tests pass it will pretty much always work first time.

    Anywhere I've worked up to now, testing was always seen as something that you did after the fact. Code coverage was the main driver of the testing. However, this approach completely misses the influence that TDD has on the design of the application. When applications are written so that they can be tested easily, they turn out to be much better designed. They are less coupled and all the dependencies are visible. Having the design emerge from the growing system is better than imposing an over-elaborate architecture and patterns top down.

    I found a couple of good resources recently on testing and the impact it has on your code. This is a good talk by Michael Feathers on the synergy between testing and design. He shows how testing problems are indicative of design problems. Misko Hevery's site has some great presentations and resources on how to design code that is testable.

    posted on May 22, 2013 in development


    Learning JavaScript

    Last year, the project I was working on at work switched languages for its UI code from C++ to HTML and JavaScript. For me, this meant learning JavaScript and web development.

    When I was in college, we studied Java as the "proper" programming language and barely covered JavaScript - only as part of an HTML course. Back then, I never really saw it as anything more than a language for adding simple dynamic features to a web page. However, ten years later, and (hopefully!) knowing a great deal more about programming, my opinions have changed. Now I have a whole new respect for JavaScript, based on features that I wouldn't have been able to really comprehend back then.

    I love the power that first-class functions and closures give you. It's spoiled me as a programmer, as I'm finding it hard going back to languages without those features! I know that they are coming, or have recently come, to Java and C++. However, given the nature of existing legacy codebases in those languages, and that projects may be restricted to earlier compilers, it'll be a while before they're mainstream.

    There is no shortage of in-depth JavaScript books and tutorials which teach all the features of the language. However, learning JavaScript syntax and features is not the problem. The real issue is knowing which features to avoid. Unfortunately it's incredibly easy to write unmaintainable code in JavaScript if you're not careful. Luckily there are some very good books written on this topic. The ones I recommend are:

    • Douglas Crockford's JavaScript: The Good Parts. This is a really good, compact book. It allows you to limit yourself to the features of JavaScript which support good software development practices.
    • Nicholas Zakas's Maintainable JavaScript. Again, this book is more than a simple explanation of JavaScript syntax. Its topics include JavaScript programming practices and build automation. The build process part is especially useful for learning the proper process for building, linting and testing JavaScript code.
    • Marijn Haverbeke's Eloquent JavaScript. This is available for free on the website. The really cool part about the site is that all the JavaScript code snippets are interactive and can be run on the page while you are reading them.

    posted on May 5, 2013 in javaScript, development


    Irish Rail's RealTime API

    I spent a lot of time in my childhood and early twenties traveling on the Irish Rail Network. Mostly this seemed to involve waiting for ages at Limerick Junction or barely finding a space to stand on a packed train from Dublin. Those experiences didn't exactly leave me with a great impression of Iarnród Éireann.

    Given this, I was amazed to find out recently, that Iarnród Éireann have an XML API for accessing realtime data about the trains running on their network. Credit where credit is due, this is an excellent idea and I wish more companies would implement something similar.

    The API provides functionality for getting a list of the stations and what trains are due at those stations in the next ninety minutes. It also gives a list of all trains active on the network. It can filter by DART, Suburban or Mainline trains (that leaves some trains in an 'other' category - I'm not too sure what these actually are).

    I used this API as part of a Backbone learning project. It was quite fun to do. The API returns latitude and longitude coordinates for each station and train, allowing them to be plotted on a Google Map widget. I never realised there were so many stations in Ireland until I saw them plotted on the map.

    One issue I ran into was testing the application with live data. Given that I was programming this at night after work, I'd soon reach a time when there were very few trains left on the network!

    One technical detail about the API is that it returns XML rather than JSON. This means I can't fetch the data directly with Ajax, due to the same-origin policy, and JSONP isn't an option either. Instead I had to bounce the results through YQL. YQL exposes a SQL-like interface to web data. I'm only using a basic 'select all' query here, but looking at their site you can do lots of cool and complex stuff. I found a good tutorial from Cypress North on how to use YQL in your code to consume an XML API.
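
    A rough sketch of the bounce, with the URLs written from memory - treat both the YQL endpoint and the Irish Rail path as assumptions rather than gospel:

        // Ask YQL to fetch the XML feed and hand it back as JSON.
        var feed = "http://api.irishrail.ie/realtime/realtime.asmx/getAllStationsXML";   // assumed path
        var query = 'select * from xml where url="' + feed + '"';
        var yql = "https://query.yahooapis.com/v1/public/yql?format=json&q=" + encodeURIComponent(query);

        var request = new XMLHttpRequest();
        request.open("GET", yql);
        request.onload = () => {
            var data = JSON.parse(request.responseText);
            console.log(data.query.results);   // the station list, now as plain JSON objects
        };
        request.send();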

    posted on April 23, 2013 in development


    Controllers in Objective C

    One of the things to learn about iOS is the MVC model. The Cocoa implementation of MVC has some differences compared to the traditional approach. For example, in Cocoa, views are not aware of the models and don't listen for model updates. Instead, all events pass through the controller, i.e. it listens for model changes and then tells the view to update itself.

    In Rails, the constant refrain is that controllers should be thin. However on iOS they seem to be absolutely huge. One joke that I saw on Twitter was that on iOS, MVC stood for Massive View Controller. For example, the Recipe sample application from Apple has controllers with hundreds of lines of code with one topping out at 600 LOC.

    One of my issues with these view controllers is that they don't follow the Single Responsibility Principle, but instead combine multiple functions. They act as delegates for multiple protocols e.g. table data source, fetched results controller delegate etc. I find it hard to distinguish the separate elements of MVC when one class is doing everything. Also in Objective C, once everything is in the same file, it's not obvious to which protocol an item belongs. I think this risks breaking the MVC boundaries. For example, during a refactor, if you're not careful, you can easily get model variables depending on controller variables and vice-versa.

    I'd much prefer it if these controllers were split out into lots of different classes, each with a single job as per the SRP. This would lead to a more composition-based rather than inheritance-based codebase. I also think that this greatly helps with code navigation. Jumping to a small, focused file has the effect of filtering out irrelevant code. I've started using Sublime Text recently and it has great functionality for navigating between files, so I prefer having lots of smaller files rather than a few large monolithic classes.

    (On a side note, this is one thing that really annoys me about Xcode. Given a properly nested folder structure with well-named files, I think it becomes a lot easier to find your way around the app. For example, even after only a few weeks learning Rails, I know exactly where to look to find the controllers, models, db code etc. But Xcode is a disaster here. It doesn't push the groupings made in the project onto the file system underneath. It requires the duplicate effort of organising the code both inside and outside the application to keep the codebase properly organised.)

    posted on March 17, 2013 in development


