
April 2012

Unit-Testing JavaScript with QUnit, Sinon, Phantom and TeamCity

(This is part 4 of a series of posts on HTML5 from the point of view of a .NET developer)

To continue yesterday's description of the frameworks and libraries I'm using for most of my projects, I'd like to write a bit about the unit-testing and test-running environment I currently prefer. (As always: nothing here is meant as definitive advice - it's really just the collection of tools that works for me at the current point in time. My preferences might change if I come across other tools which improve my workflow.)

For JavaScript unit testing, my current main tool for specifying and implementing tests is QUnit. I've added a few small extensions which allow me to run individual tests (based on URL parameters) instead of full test modules; a feature I mainly use during development when I'm interactively working on certain features. (Unlike some of my friends who are very strict TDD followers, I'm a very interactive developer and spend quite a bit of time in Chrome's debugger and console while writing code. During this phase, I sometimes use my unit tests simply to start off an interactive exploration in the JavaScript REPL console …)

Together with QUnit, I regularly use Sinon.js, which provides a very nice implementation of spies, stubs, and mocks. Most importantly though, it can fake timers, XHR and server requests.
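
For the request faking, Sinon replaces the browser's XHR object with a fake server whose responses you control from within the test. A minimal sketch (the /api/tasks endpoint is just a made-up example, and I'm assuming jQuery for the actual call) looks roughly like this:

// Replace the browser's XHR object with Sinon's fake server
var server = sinon.fakeServer.create();

// Define a canned response for a (hypothetical) endpoint
server.respondWith("GET", "/api/tasks",
    [200, { "Content-Type": "application/json" },
     '[{ "id": 1, "title": "Write tests" }]']);

var result;
$.getJSON("/api/tasks", function (data) {
    result = data;
});

// No real network traffic happens - the test decides when the "server" answers
server.respond();

ok(result, "Response has been delivered by the fake server");
equal(result[0].title, "Write tests", "Fake payload has been parsed");

// Hand XHR back to the browser
server.restore();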

When faking timers, Sinon takes over window.setTimeout() and friends and allows you to explicitly control the passing of time. You can for example do things like the following:

// Replace window.setTimeout() & friends with Sinon's synchronous fakes
var sinonTimers = sinon.useFakeTimers();
var called = false;

window.setTimeout(function() {
    called = true;
}, 100);

ok(!called, "Callback has not been invoked");

// Advance the fake clock by 99ms - one millisecond short of the timeout
sinonTimers.tick(99);
ok(!called, "Callback has still not been invoked");

sinonTimers.tick(1);
ok(called, "Callback has now been invoked");

// Hand the timer functions back to the browser
sinonTimers.restore();

This allows you to run time-based tests (timeouts, animations, …) without actually having to wait for wall-clock time to pass.

Running Tests

For automated execution of tests, I mostly use TeamCity, which is also the CI environment for the compilation of our server-side components. To run the client-side JavaScript unit tests alongside the .NET unit tests, we rely on PhantomJS, a headless WebKit implementation with a JavaScript API. This means that you can use JavaScript to drive an invisible browser, which provides you with a full WebKit-style implementation of the DOM and the complete stack of client-side behavior.

To combine PhantomJS with QUnit, we're using an adaptation of two scripts written by José F. Romaniello: the first enables command-line execution of tests with Phantom, and the second allows the TeamCity test runner to interpret the results of your tests. You can find some more detail about the required output format for TeamCity here.

In TeamCity, we then configured a build step as a command-line invocation which calls "c:\windows\system32\cmd.exe" with the parameter "/c c:\some\directory\runtests.bat". runtests.bat in turn simply invokes PhantomJS with the above-mentioned JS file and points it to a URL which includes all JavaScript tests in our project.
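
To give an idea of what such a setup looks like, here is a heavily simplified sketch (not the actual scripts we use - the file names, the sentinel string and the escaping-free service messages are illustrative only). The Phantom-side runner essentially just opens the test page and forwards its console output:

var page = require('webpage').create();
var url = phantom.args[0];

// Forward everything the test page logs - including the TeamCity
// service messages emitted below - to stdout, where TeamCity reads it
page.onConsoleMessage = function (msg) {
    if (msg === 'QUNIT-DONE') {
        phantom.exit(0);
    } else {
        console.log(msg);
    }
};

page.open(url, function (status) {
    if (status !== 'success') {
        console.log('Could not open ' + url);
        phantom.exit(1);
    }
});

Inside the QUnit test page, a handful of callbacks translate the test results into TeamCity's service-message format (a real implementation also has to escape characters like ' and | in test names and messages):

QUnit.testStart(function (details) {
    console.log("##teamcity[testStarted name='" + details.name + "']");
});

QUnit.log(function (details) {
    if (!details.result) {
        console.log("##teamcity[testFailed name='" + details.name +
            "' message='" + (details.message || "assertion failed") + "']");
    }
});

QUnit.testDone(function (details) {
    console.log("##teamcity[testFinished name='" + details.name + "']");
});

QUnit.done(function () {
    console.log("QUNIT-DONE");
});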

Future plans

I have to admit that - as always - the situation is quite nice, but currently not 100% perfect. The detailed error information for failed unit tests, for example, is currently only available in the build log but not in the test overview. But this is one of those things which simply stays in place once a state of "good enough" has been reached.

One other thing we plan for the future is to use additional test runners which execute the tests in different real browser instances. But for now, the above is good enough and gives us very reliable and consistent results.

Frameworks, Libraries, and more ...

(This is part 3 of a series of posts on HTML5 from the point of view of a .NET developer)

One of the recurring questions when I tell someone that my new company's product is a "real" 100% HTML5 application and not just a fancified web site is which libraries we've chosen and why.

Whenever discussing libraries and frameworks for JavaScript applications, I think it's very important to point out that there are two very distinct classes of them: on one hand, you'll see a lot of general purpose libraries focused on smaller tasks. On the other hand, a number of prescriptive frameworks (for MVC, MVVM, storage, …) have been created, which recommend/support/suggest/mandate certain patterns, architectures and implementation paths.

General Purpose Libraries I'm using

In this post, I'd like to focus mainly on general purpose libraries. These are usually smaller libraries (apart from jQuery itself) which provide a distinct set of functionality to be used in various kinds of web applications. A lot of them are very useful, no matter whether you're creating a big single-page application or a collection of more classic page-by-page web applications. The ones I generally use are also rather mature and of a certain age (and therefore lower-risk).

The same thing unfortunately can't be said about the majority of prescriptive framework-style libraries. But that's definitely a topic for a future post.

jQuery and jQuery UI

jQuery shouldn't need much more than a simple mention. It's quite likely been the basis for the majority of AJAX-enabled web sites for the last five years or so. Biggest difference to, say, Dojo Toolkit: no real component model. No modularization.

In general, that's not a big issue for me, but sometimes (especially when I'm writing native applications for iOS or Android, which might wrap a number of individual web views) the per-page initialization time might be critical. In this case, I tend to look towards zepto.js or similar low-impact jQuery derivatives instead.

less.js

CSS is fine. LESS is better. It gives you hierarchical CSS with support for constant and variable mixins. (The only thing I tend to miss is a built-in and cleaner handling for media queries.) less.js interprets LESS styles in the user's browser.

To reduce the risk of funny things happening over-the-air or with downlevel browsers, you could instead compile LESS on the server side to produce regular CSS (there are a variety of compilers for the different development environments). Be aware though, that some of these compilers seem to be rather particular about their interpretation of certain LESS statements.

WARNING: Some mobile phone providers tend to transparently and automatically intercept HTTP requests over 2G, 3G or 4G connections to optimize transfer times over these high-latency links. This usually means that a proxy will cache your CSS and will even combine multiple CSS files into one to be sent directly in the HTML response. If you already use some kind of CSS and JS minification tool, this behavior is not nice, but it's still a fact of life you will have to deal with. One of the most obvious examples of this interference is that it might render your LESS links invalid, because the intercepting proxies tend to change the content-type specifier for the link, which is used by the client-side less.js to find out which files need to be translated from LESS to CSS.

There are two possible workarounds for this problem: for one, you could switch to HTTPS instead of HTTP, as this pretty much guarantees that the HTML you send is the exact HTML the client receives. The other alternative is to send the following HTTP headers to convince your mobile operators to do the right thing (but unlike the HTTPS solution, your mileage may vary, depending on whether the proxy honors these requests or not):

Cache-Control:no-cache
Cache-Control:no-store
Cache-Control:no-transform
Expires:-1
Pragma:no-cache

Handlebars.js

Today, handlebars.js is my preferred templating library. It supports markup extensions (helpers), so that you don't have to artificially create specific models which encapsulate display-only logic for individual views. I'm currently a big fan of template-only rendering without too much databinding/MVVM magic. (But that's a topic for a future post …)
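
As a quick illustration of the helper mechanism (the template and the shortDate helper are made up for this example, not part of the library):

// Register a helper which keeps display-only formatting out of the model
Handlebars.registerHelper('shortDate', function (isoString) {
    return isoString.substring(0, 10);
});

var template = Handlebars.compile(
    '<li>{{title}} - due {{shortDate dueDate}}</li>');

// The model stays plain data - no view-specific properties needed
var html = template({ title: 'Ship it', dueDate: '2012-04-30T12:00:00Z' });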

jQuery BBQ (jquery.ba-bbq.js)

jQuery BBQ (Back Button and Query library) provides clean hashchange detection and emulation for browsers down to IE6. Works perfectly and is a must-have staple for all single-page applications.
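
In practice this mostly boils down to two calls - pushState() when navigating and getState() when reacting to the hashchange event (the state keys here are just examples):

// Push the current view into the URL hash without reloading the page
$.bbq.pushState({ view: 'details', id: 42 });

// React to back/forward navigation (and to manual hash edits)
$(window).bind('hashchange', function () {
    var view = $.bbq.getState('view') || 'list';
    var id = $.bbq.getState('id');
    // ... re-render the requested view here ...
});

// Trigger the handler once on page load so deep links work too
$(window).trigger('hashchange');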

underscore.js

When I don't feel like writing for-loops to filter data, underscore.js is my preferred means for functional data manipulation. It's the closest you'll get (for now) to LINQ-style collection code while remaining within JavaScript.
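
A small example of the kind of LINQ-like chaining this enables (the task data is obviously made up):

var tasks = [
    { title: 'Write tests', done: true,  effort: 3 },
    { title: 'Fix build',   done: false, effort: 1 },
    { title: 'Ship it',     done: false, effort: 5 }
];

// Roughly the equivalent of:
// tasks.Where(t => !t.Done).OrderBy(t => t.Effort).Select(t => t.Title)
var openTitles = _.chain(tasks)
    .filter(function (t) { return !t.done; })
    .sortBy(function (t) { return t.effort; })
    .pluck('title')
    .value();   // ["Fix build", "Ship it"]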

FullCalendar

FullCalendar is the calendar library for jQuery today.

jquery.fastclick.js

jQuery.fastclick.js is a library based on an article on Google's Mobile Developer Blog which illustrates how you can avoid the 300ms click delay on most mobile touch-enabled devices. It helps your application react instantaneously and thereby more closely mimic the behavior of native applications.

Today, I'm using different fastclick-derivatives which are more specifically tailored for each particular project. (Mainly depending on whether or not the application is running solely in Phonegap-style wrappers so that it only has to support touch, but not click)
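
The core idea behind all of these fastclick variants is roughly the following (heavily simplified - real implementations also deal with touch movement, scrolling and the "ghost click" which follows a touch; the save() handler is just a placeholder):

function save() {
    // hypothetical application logic
}

// React to the touch immediately instead of waiting ~300ms for the
// browser to decide that this wasn't the start of a double-tap zoom
$('#save-button').bind('touchstart', function (e) {
    e.preventDefault();   // also suppresses the synthesized click
    save();
});

// Keep a plain click handler as a fallback for non-touch browsers
$('#save-button').bind('click', function () {
    save();
});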

iso8601.js

iso8601.js is a library for parsing and writing dates from/to ISO8601 strings. As long as JSON doesn't define a standard date format, I'm using this library to take matters into my own hands.

We're also generally not transferring timezone information in the date/time itself, but communicate this data out-of-band. (After all, it's no big fun if you switch to a server in a different timezone only to realize that all your calendar entries are now off by a day.)
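
I don't want to reproduce the library here, but the underlying problem is easy to show: JSON.parse() hands you back plain strings, so something has to turn them into Date objects again. A stripped-down version of such a conversion (illustrative only, not the actual iso8601.js API) looks roughly like this:

// Parse a UTC ISO 8601 string like "2012-04-27T14:30:00Z" into a Date.
// (Stripped down: no fractional seconds and no timezone offsets - which is
// fine here, since we transfer timezone information out-of-band anyway.)
function parseIsoDate(value) {
    var m = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})Z$/.exec(value);
    if (!m) {
        return null;
    }
    return new Date(Date.UTC(+m[1], +m[2] - 1, +m[3],
                             +m[4], +m[5], +m[6]));
}

var due = parseIsoDate('2012-04-27T14:30:00Z');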

Unit Testing

These were most of the general-purpose libraries I'm using on a day-to-day basis. In the next post, I'll talk about the libraries and environments I'm currently using for unit testing and test automation.

On the Love for JavaScript

(This is part 2 of a series of posts on HTML5 from the point of view of a converted .NET developer)

JavaScript is hardly everybody's favorite language. I've talked to numerous developers in the Microsoft-centric .NET developer universe (which was more or less my exclusive area of work from 2001 to 2010) and a lot of them found interestingly graphic ways to express their disdain for this language. Some of them were 100% clear that a shift of their company towards JavaScript would initiate their immediate departure from said organization. (If I'd draw a comic strip about this, I'd have to use my monthly allotment of symbol characters within the first frame. It would look a lot like static line noise just after your modem's carrier started to drop, if you know what I mean.)

But why is this language so polarizing? Well, first of all, it's an old language. With a lot of baggage. Designed in a different world. But these are things you can work around … otherwise Visual Basic 6 wouldn't have had that many vocal supporters when .NET was introduced in 2001/2002. And C++ would have none today.

JavaScript’s Issues

There are however two critical issues with JavaScript.

For one, JavaScript is largely paradigm-free. You can write procedural code, functional code and OO code with it. But the latter comes with its own pitfalls: JavaScript is a prototypal language, not a class-based one. This basically means that objects inherit behavior from other objects, not from classes. But life would still be easy at this point … if it weren't for the fact that most modern JavaScript code also heavily relies on closures. For someone living in the .NET space, the flexibility of closures in JavaScript is dramatic. (And if you live in the Microsoft space, you can in fact fare quite nicely without ever explicitly knowing what a closure is … so don't mind if you don't yet know how powerful they can be.)
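
To make the prototype part concrete, here's a minimal sketch (nothing project-specific, just the language mechanism):

// There is no class here - 'dog' inherits directly from the 'animal' object
var animal = {
    describe: function () {
        return this.name + ' makes a sound';
    }
};

var dog = Object.create(animal);   // dog's prototype is the animal object itself
dog.name = 'Rex';

console.log(dog.describe());                          // "Rex makes a sound"
console.log(Object.getPrototypeOf(dog) === animal);   // true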

The second - and maybe more important - issue is quite likely simply rooted in the history of the language: if you base your love or hate of JavaScript on your recollection of code written before, say, 2006, you're missing out big time. The pre-2006 style of code is largely outdated and no longer used. But it's not that JavaScript received some new and powerful features afterwards; it's simply that people started to write larger-scale software with JavaScript around that time. And this led to the creation of frameworks like the Dojo Toolkit in 2004, Prototype in 2005 and jQuery in 2006.

These frameworks not only provided solutions to important issues at that time, but they also provided guidance to other JavaScript developers. The world started to shift from mainly-procedural onclick-handlers with hundreds of global variables towards sophisticated, encapsulated, extensible and reusable code. It really seems that this paradigm-free language was in dire need of some paradigms.

Learning to love It

I generally believe that there’s going to be a substantial amount of JavaScript in our future. And even if you decide that you don’t want to write the language, you should at least be able to read it.

If you're - like I was in 2010 - returning to this language, I'd like to recommend the one book which has dramatically changed the way I think about JavaScript. I learned that basically everything I thought about JS was wrong … nothing more, nothing less.

JavaScript - The Good Parts by Douglas Crockford

But no matter whether you plan to spend a lot of time with this language or just want to know enough to get by, there's one thing which you absolutely, positively have to understand to be able to read any current JavaScript code: how this language deals with closures and how they are used to create private object members. Douglas Crockford wrote about this back in 2001, and in the meantime the approach outlined in Private Members in JavaScript (and comparable ways of reaching the same goal) has become one of the main ways encapsulation is achieved in JavaScript.

Now, truth be told, I'd actually recommend that you read the book if possible, because his writing and his way of explaining things is quite a bit better in the book than in the old online article. But the end result remains the same: only after understanding how closures work in JavaScript will you be able to read the majority of today's JavaScript code. And this is going to be vitally important for a .NET developer — because this is one of the two big differences (prototypes being the other) between your language of choice and JavaScript.
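
The essence of the pattern is tiny (this is a generic sketch, not code from Crockford's article): local variables captured by a closure act as private fields, and the functions which close over them form the public interface.

function createCounter() {
    // 'count' is only reachable through the closures below -
    // it behaves like a private field
    var count = 0;

    return {
        increment: function () {
            count += 1;
            return count;
        },
        current: function () {
            return count;
        }
    };
}

var counter = createCounter();
counter.increment();          // 1
console.log(counter.count);   // undefined - no direct access to the private state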

This single thing stood between me and my enjoyment of this language for a very, very long time.

Now after making the language work for you, it’s time to look at the various frameworks and libraries, which is going to be the focus of the next posts …

18 Months of HTML5 - Will this work for me?

(This is part 1 of a series of posts.)

One of the main questions I usually get from our clients when talking about these ideas is: will this HTML5 thing work for us? While I can understand the importance of the question, I think it really depends on "what do you want to get out of it?" Or to put it differently: "where do you feel the limitations of your current environment?"

For me, the limitations I personally felt (when creating WPF or Silverlight applications) were the following:

  • I want an easy way to reach both desktops and tablets (iPad, essentially)
  • But I still want to run the same apps on the web
  • I need to create interactive applications which work with locally stored data. Apps which also work when offline.
  • I want to support Android and iOS phones

And quite importantly: I am looking mainly at business applications. Things which help people get their jobs done. Not games. Not apps people run for fun. (Is this important? I don't know. I want to create business apps which people actually enjoy using. But I digress …)

So, I think the main driver for me was the increased reach. If this is your goal, then HTML5 (and of course, whenever I mention HTML5 in this series, I'm always talking about HTML5 and the related technologies like JS and CSS3) might be a nice way to get there. Is it the only way? Definitely not. And if you or your team are absolutely uncomfortable with the idea of writing JavaScript, not too happy about working with the idiosyncrasies of CSS, and don't really want to get used to constantly seeing rendering differences in the HTML markup itself, then you might want to stay away from it. You'd have a tough learning curve ahead of you, and there might be lower-hanging fruit which yields dramatically better ROI for you.

Browser Differences and Standards Support

One of the main points of criticism towards HTML5 is the lack of consistent (and - of course - the lack of complete) standards support. This drives one of the most important decisions you've got to make: what amount of support and functionality do you want to provide to downlevel browsers? (And "downlevel" might involve a feature-by-feature check for fine-grained decisions.) For the systems I've worked on, it was usually OK to fall back to classic client/server web behavior or to provide alternative plugin-based solutions for older browsers (for example, to use Flash-based charts instead of Canvas-based ones). Some purists might object to this, but for me, HTML5 is only a means to an end: if it allows me to use Canvas-based charting and HTML5-based local storage on mobile devices, I'm still more than happy to use Flash-based charting and storage for older desktop environments.
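
Such a feature-by-feature check doesn't have to be complicated - libraries like Modernizr cover most of this ground, but even a hand-rolled sketch (the chart fallback is just an example) looks roughly like this:

function supportsCanvas() {
    var el = document.createElement('canvas');
    return !!(el.getContext && el.getContext('2d'));
}

function supportsLocalStorage() {
    try {
        return 'localStorage' in window && window.localStorage !== null;
    } catch (e) {
        return false;
    }
}

if (supportsCanvas()) {
    // render the charts via <canvas>
} else {
    // fall back to the plugin-based (e.g. Flash) charting component
}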

I also think that life might be easier if you are mainly addressing web savvy consumers or users in small companies, as you might be able to get by with targeting only the newest browsers: after all, these users will usually take advantage of their browsers’ and operating systems’ auto update functionality. Enterprise users however are usually very different: IE7 and Firefox 3.0.x are still very much a reality in early 2012. So while HTML5 might get you onto your enterprise clients’ iPads and iPhones, it might not get you onto their desktops.

For my new company’s product, we’ve for example decided to support the latest two versions of all major desktop browsers, as well as Android >= 2.2 and iOS >= 4.0. (We’re actually supporting more browsers than this, but this selection is at least our baseline). Our product is commercial software, so we expect clients to spend some time evaluating it before committing to purchase access to it. We also believe that we can convince them that our software saves them more than enough time to make it worthwhile for them to switch to a newer browser to get access to all features.

Ship Your Own Browser

At first glance, this situation would be hugely different if you were targeting the "Enterprise Market".

Let's just assume that today you create WPF applications on Windows and you've been toying with the idea of moving towards HTML5 to gain bigger reach. The above analysis might seem to settle matters for you: with IE7, there simply is no HTML5. And the whole idea of creating your own levels of abstraction on top of fallback solutions for different browsers might not be your cup of tea at all.

In this case, you’d always have the possibility to simply bring your own browser.

Now, of course, I'm not talking about shipping IE10 with your application. Instead, you might think about wrapping your application in the reusable rendering components of one of today's browsers (some of these approaches even work without any installation on the target machine). When looking at today's browser market, you are basically looking at three different browser engines which make up the majority of desktop and mobile browsers: Internet Explorer, Gecko (Firefox) and WebKit (Apple Safari, Chrome, Android, iOS, Blackberry, …). WebKit itself was open-sourced by Apple quite a few years ago and has been embedded in a multitude of applications and in a lot of places you wouldn't expect it.

When following this bring-your-own-browser approach, you could basically create an xcopy-deployable (or maybe also an installable/MSI-deployable) shell application with a single GUI element: one large Web Browser control. You’d ship HTML5, JS and CSS files alongside your application so that these elements can run inside the Web Browser control. Yes that’s a bit simplified, but you get the idea: Your enterprise customers can then take your HTML5 application (the same one which would also run in a regular browser if they’d update their infrastructure) and deploy its MSI/installer-package using their usual management tools. All of this without you having to support the old and outdated browsers which are way too common in this environment.

And if wrapping WebKit's native code on your own sounds like a bit too much complexity for you: don't worry. There are ready-made implementations like Adobe's free AIR SDK (not "the Flash thing", just the SDK) which provide exactly this: a known version of WebKit with a known API which you can ship alongside your application. If you are a more adventurous developer, you might even want to look into hosting your own WebKit environment directly — maybe by wrapping it in (the unfortunately currently somewhat outdated) WebKit.NET. And last but not least: for all kinds of mobile phones and tablets, Apache's Cordova project (formerly known as PhoneGap) might provide a very reasonable shell. (And of course, in the not-so-distant future, this approach might also help you get a substantial part of your application logic onto Windows 8's WinRT environment as well.)

But isn’t this a lot of work?

But, yes, porting your application is a lot of work and you should be very careful to avoid overly optimistic ROI planning. In a lot of cases, it might simply not be worth the risk of abandoning a platform you’re very experienced with, in favor of cutting-edge, moving targets and unknown environments. In other cases, this increased reach might be exactly what you need to distinguish your application from your competitors’  …

But even then, there’s still the language! But that’s something for the next post in this series …

Life After 18 Months of HTML5 - Part 0

 

This is the introduction to a series of posts I started writing in April 2012. The other posts in this series are:

  • 18 Months of HTML5 - Will this work for me? (part 1)
  • On the Love for JavaScript (part 2)
  • Frameworks, Libraries, and more ... (part 3)
  • Unit-Testing JavaScript with QUnit, Sinon, Phantom and TeamCity (part 4)

In late 2010 I decided for myself to refocus my technical research and development efforts. After nearly 10 years of working with .NET, the limits of this technology had become apparent: while Windows does great on servers and still runs a large part of the desktop space, the continuous progress in the tablet and mobile environments was largely happening on non-Windows platforms.

The technical area which started to gain momentum back then was JavaScript running inside HTML5 compatible browsers. (Today, of course, JS is becoming more and more accepted as a viable development platform - in 2010 however, the mere idea of suggesting JS+HTML5 as the basis for business applications caused quite interesting reactions.)

My goal at that time was to demonstrate that JS+HTML can be a very productive development platform, which also increases the reach from just a single OS to a multitude of modern browsers and devices. I wanted to create applications which can run as offline-capable web apps on regular desktops and laptops, but which also offer a mobile-web-compatible GUI; at the same time, these apps should be installable (like classic applications) on Windows machines, on Macs, on Linux and of course on iOS and Android. All of this should be achieved with the largest possible reuse of source code between these different platforms, while taking advantage of native platform features where reasonable.

Given that my background has always been with web applications and their server-side infrastructure, I felt quite at home in this world: JavaScript turned out to be an extremely productive language, and the independent choice of backends (for me mostly ASP.NET MVC and node.js) gave me the ability to pick a reasonable tool for each task.

That brings us here. After more than a year and a half of in-depth research, prototype development and several production applications for different clients (and for a new company I'm currently launching), I'm finally finding the time to write about my experiences with this technology stack. I have waited for such a long time because I really wanted to write this not so much from the "enthusiastic early adopter" point of view, but instead based on real applications and real issues.

In this series of posts, I plan to talk about the decisions we faced when implementing our clients’ solutions, the issues we faced regarding browser-differences, and of course the libraries, tools and techniques we're using.

(NB: in the interest of full disclosure: I'm currently starting a new company in this space and we're implementing cloud-based business software (as a full product, not as a developer toolkit or consulting projects) based on these approaches. While thinktecture will remain my primary home for consulting and developer support, this reduces the number of engagements I can accept in 2012 and beyond. But thinktecture is not just me alone, and I was able to convince most of my colleagues to join the HTML5 train quite a while ago as well :-))