IdentityServer and IdentityManager are now part of the .NET Foundation

We would like to join the current public announcements of Microsoft’s stronger focus on open source with the news that IdentityServer and IdentityManager have become part of the .NET Foundation. This allows the two frameworks to grow alongside such notable projects as .NET Core itself, ASP.NET vNext, ASP.NET MVC, Web API, and SignalR.

Joining the Foundation provides these projects with a strong organizational backbone and increases the visibility and attractiveness of IdentityServer and IdentityManager to both new users and new committers. If you are a current user of one of these projects, this provides even stronger long-term safety for your investment in these frameworks.

(Continue to the .NET Foundation’s announcement for more details …)

On the technical side, Dominick Baier and Brock Allen will remain the main technical leaders of this project, and all of us are already looking forward to the final release of IdentityServer v3 in the not-so-distant future.

Just published: Modularizing AngularJS Applications

Earlier today, we published another new chapter for our ebook on AngularJS for .NET Developers. This chapter, Modularizing AngularJS Applications, discusses the options you have for separating Angular applications into multiple logical parts.

The next chapter, which I'll quite likely start to write in about a week, will build upon this one and present the different options for physical separation (multiple files, ...) and how to deploy an AngularJS application inside the context of ASP.NET MVC. From there, we'll examine different levels of integration with ASP.NET MVC backends (Web API, SignalR, claims-based security, ...).

Services and Dependency Injection in AngularJS

I've just finished the first draft of the chapter on 'Services and Dependency Injection in AngularJS' for our ebook.

This is quite an important topic in Angular, because the whole framework is based on DI principles. In fact, every Angular module is essentially just a configuration for the dependency injector.
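
To make the "module as configuration" idea concrete, here is a deliberately tiny, hypothetical dependency injector in plain JavaScript. The names (Injector, factory, get) are made up for illustration and are not Angular's API; Angular's real injector works along similar lines, just with much more machinery:

```javascript
// Hypothetical mini-injector: a "module" is just a set of recipes
// registered with the injector, which builds and caches services on demand.
function Injector() {
    this.recipes = {}; // name -> factory function
    this.cache = {};   // name -> created instance (services are singletons)
}

Injector.prototype.factory = function (name, fn) {
    this.recipes[name] = fn;
    return this; // allow chained registration, module-style
};

Injector.prototype.get = function (name) {
    if (!(name in this.cache)) {
        this.cache[name] = this.recipes[name](this);
    }
    return this.cache[name];
};

// "Configuring the module": registering recipes, not creating objects.
var injector = new Injector();
injector.factory("config", function () {
    return { apiUrl: "/api" };
});
injector.factory("client", function (inj) {
    // the client declares its dependency and lets the injector resolve it
    return { url: inj.get("config").apiUrl + "/items" };
});

console.log(injector.get("client").url); // "/api/items"
```

Angular adds parameter-name-based resolution, modules, and service lifecycles on top of this basic pattern, but the core idea - declare dependencies and let the injector wire them up - is the same.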

In the next chapter - which I plan to release tomorrow - I'll focus on the details of modularization of AngularJS applications, on module dependencies and on how they work in conjunction with DI.

Introducing: (AngularJS for .NET Developers)

Short Version/TL;DR: Christian and I are writing a continuously deployed ebook about AngularJS for .NET developers (or, more generally, for business application developers). You can also follow us on Twitter at @henriquatreJS.

After nearly three years of creating client-side JavaScript applications - after much contemplating, framework-researching, trying, abandoning, trying something else, and abandoning it again - I stumbled upon AngularJS a few months ago.

Before Angular, I had not been a big fan of most of the existing client-side JavaScript frameworks. Something usually drove me away: their forcing me into certain ways of creating models (custom observable data structures instead of regular plain-old JavaScript objects), their overarching prescriptiveness (sometimes reaching far into the structure of server-side endpoints), their vastness (most of the time forcing you to use them either for a complete application, including routing, databinding, etc., or not at all), or simply a poor risk/benefit ratio given the smallish size of their contributor communities.

So I've had to create and maintain my own micro-frameworks during most of this time and really didn't appreciate it very much. But that was before AngularJS. Angular solves the majority of the issues I had with other frameworks: it allows me to use any JavaScript-object for two-way data binding to a GUI, it concentrates only on the client and does not prescribe anything for the server, it can be used bit-by-bit if desired (say, only databinding for one complex form in an application) and it is backed by a community of contributors and advanced users who are just fantastic.

Yes, and it even provides a simple way to create domain-specific extensions for HTML which are rendered to 'real' HTML on the fly. This might of course be extremely appealing to anyone with a XAML background ...

So Christian and I have been working with AngularJS for a while and recently decided that we'd like to share our experiences in this area. We didn't want to create videos, because John Lindquist (whose screencasts we've recommended to our clients time and time again) already provides really great ones. We also expect that our friends at Pluralsight will cover this area in great depth with presentation-style videos in the future.

But a paper book? No. That would definitely be too slow. Just imagine: AngularJS is currently making a major overhaul of (and addition to) its client-side routing framework. Right now, actually. No, we could never cover these things in a book.

So we've decided to write a continuously published ebook. We're going to release individual alpha chapters throughout the next weeks, and we hope that this becomes a valuable resource for business-application developers looking to move their experience towards HTML5/JS-based client applications.

As a start, we've released three main chapters today:

You can also follow us on Twitter at @henriquatreJS to be notified whenever we release new content.

Compute performance in the cloud - EC2 vs Azure

Even though I have a very strong Microsoft legacy, my preferred cloud platform in the past year has actually been EC2. The reason was simply that I seemed to prefer the IaaS model over PaaS for the use cases I worked with during that timeframe.

Given that Windows Azure has recently introduced persistent VM targets, I've decided to run a quick test to compare the performance of similar 64-bit Small-sized VMs with Ubuntu 12.04 running on Amazon's EC2 vs. Azure (located in Azure's European zone and in AWS's Europe-West region).

I've decided to simply run a few CPU-bound tasks and chose the (rather dated, but still) hardinfo benchmark. Please keep in mind that this benchmark only tests for CPU-bound loads, but not for I/O at all.

To re-run these tests on your own machines, you can simply log in to your VMs and install the benchmark tool with "sudo apt-get install hardinfo" and then run it with "sudo hardinfo".

The Numbers

Benchmark: EC2 vs. Azure
(Shorter == better, apart from CryptoHash, for which longer == better. Small 64-bit instances of Ubuntu 12.04 LTS.)

Please take these test results with a grain of salt: they are just the result of about half an hour of free time and a desire to check some of the rumors I've heard regarding EC2's comparatively bad performance.

In any case: I personally think that the results warrant a lot more research into the performance (and of course also the I/O capabilities) of on-demand cloud platforms …

Update (June 15, 2012): I've just had the chance to look into the discrepancy for CryptoHash. It turns out that the benchmark suite uses shorter == better for all tests apart from that one, for which an inverse metric is used.

Unit-Testing JavaScript with QUnit, Sinon, Phantom and TeamCity

(This is part 4 of a series of posts on HTML5 from the point of view of a .NET developer)

To continue yesterday's description of the frameworks and libraries I'm using for most of my projects, I'd like to write a bit about the unit-testing and test-running environment I currently prefer. (As always: nothing here is meant as sage advice - it's really just a collection of tools that works for me at the current point in time. My preferences might change if I come across other tools that improve my workflow.)

For JavaScript unit testing, my current main tool for specifying and implementing tests is QUnit. I've added a few small extensions that allow me to run individual tests (based on URL parameters) instead of full test modules; a feature I mainly use during development when I interactively work on certain features. (Unlike some of my friends, who are very strict TDD followers, I'm a very interactive developer and spend quite a bit of time in Chrome's debugger and console while writing code. During this phase, I sometimes use my unit tests simply to start off an interactive exploration in the JavaScript REPL console …)

Together with QUnit, I regularly use Sinon.js, which provides a very nice implementation of spies, mocks, and stubs. Most importantly, though, it can fake timers, XHR, and server requests.

When faking timers, Sinon takes over window.setTimeout() and friends and allows you to explicitly control the passing of time. You can for example do things like the following:

var sinonTimers = sinon.useFakeTimers();
var called = false;

window.setTimeout(function() {
    called = true;
}, 100);

ok(!called, "Callback has not been invoked");

// advance the fake clock by 50ms - still before the 100ms timeout
sinonTimers.tick(50);
ok(!called, "Callback has still not been invoked");

// advance another 50ms - the timeout now fires synchronously inside tick()
sinonTimers.tick(50);
ok(called, "Callback has now been invoked");

// restore the real timer implementations
sinonTimers.restore();


This allows you to run time-based tests (timeouts, animations, …) without actually having to wait for wall-clock time to pass.

Running Tests

For automated execution of tests, I mostly use TeamCity, which is also the CI environment for the compilation of our server-side components. To run the client-side JavaScript unit tests alongside the .NET unit tests, we rely on PhantomJS, which is a headless WebKit implementation with a JavaScript API. This means that you can use JavaScript to drive an invisible browser, which provides you with a full WebKit-style implementation of the DOM and the complete stack of client-side behavior.

To combine PhantomJS with QUnit, we're using an adaptation of two scripts written by José F. Romaniello: the first enables command-line execution of tests with Phantom, and the second allows the TeamCity test runner to interpret the results of your tests. You can find some more detail about the required output format for TeamCity here.

In TeamCity, we then configured a build step as a command-line invocation which calls "c:\windows\system32\cmd.exe" with the parameter "/c c:\some\directory\runtests.bat". runtests.bat in turn simply invokes Phantom.js with the above-mentioned JS file to start the test run by pointing to a URL which includes all JavaScript tests in our project.
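
For illustration, here is a hedged sketch of the kind of service messages such a runner emits on stdout. The helper names below are made up, but the ##teamcity[...] syntax, with |-escaping for ', |, [ and ], is TeamCity's documented convention for reporting test results from arbitrary build output:

```javascript
// Hypothetical helpers emitting TeamCity service messages; TeamCity picks
// these lines up from the build output and turns them into test results.
function teamcityEscape(value) {
    // Simplified: ', |, [ and ] must be prefixed with | (newline escaping omitted)
    return value.replace(/['|\[\]]/g, function (c) { return "|" + c; });
}

function testStarted(name) {
    return "##teamcity[testStarted name='" + teamcityEscape(name) + "']";
}

function testFailed(name, message) {
    return "##teamcity[testFailed name='" + teamcityEscape(name) +
           "' message='" + teamcityEscape(message) + "']";
}

console.log(testStarted("qunit: module1 - first test"));
console.log(testFailed("qunit: module1 - first test", "expected [1] but got [2]"));
```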

Future plans

I have to admit that - as always - the situation is quite nice, but currently not 100% perfect. The detailed error information for failed unit tests, for example, is currently only available in the build log but not in the test overview. But this is one of those things that simply seems to stay in place as soon as a state of "good enough" has been reached.

One other thing we plan for the future is to use additional test runners which will execute the tests in different real browser instances. But for now, the above is good enough and gives us very reliable and consistent results.

Frameworks, Libraries, and more ...

(This is part 3 of a series of posts on HTML5 from the point of view of a .NET developer)

One of the recurring questions I get when I tell someone that my new company's product is a "real" 100% HTML5 application, and not just a fancified web site, is about the libraries we've chosen and why we've chosen them.

Whenever discussing libraries and frameworks for JavaScript applications, I think it's very important to point out that there are two very distinct classes of them: on one hand, you'll see a lot of general purpose libraries focused on smaller tasks. On the other hand, a number of prescriptive frameworks (for MVC, MVVM, storage, …) have been created, which recommend/support/suggest/mandate certain patterns, architectures and implementation paths.

General Purpose Libraries I'm using

In this post, I'd like to focus mainly on general purpose libraries. These are usually smaller libraries (apart from jQuery itself) which provide a distinct set of functionality to be used in various kinds of web applications. A lot of them are very useful, no matter if you're creating a big single-page application or a collection of more classic page-by-page web applications. The ones I generally use are also rather mature and of a certain age (and therefore lower-risk).

The same thing unfortunately can't be said about the majority of prescriptive framework-style libraries. But that's definitely a topic for a future post.

jQuery and jQuery UI

jQuery shouldn't need much more than a simple mention. It has quite likely been the base for the majority of AJAX-enabled web sites for the last five years or so. The biggest difference to, say, the Dojo Toolkit: no real component model. No modularization.

In general, that's not a big issue for me, but sometimes (especially when I'm writing native applications for iOS or Android, which might wrap a number of individual web views) the per-page initialization time might be critical. In this case, I tend to look towards zepto.js or similar low-impact jQuery derivatives instead.


LESS and less.js

CSS is fine. LESS is better. It gives you hierarchical CSS with support for constant and variable mixins. (The only thing I tend to miss is built-in, cleaner handling of media queries.) less.js interprets LESS styles in the user's browser.

To reduce the risk of funny things happening over the air or with downlevel browsers, you could instead compile LESS on the server side to produce regular CSS (there are a variety of compilers for the different development environments). Be aware, though, that some of these compilers seem to be rather peculiar in their interpretation of certain LESS statements.

WARNING: Some mobile phone providers tend to transparently and automatically intercept HTTP requests over 2G, 3G or 4G links to optimize transfer times over high-latency communication links. This usually means that a proxy will cache your CSS and may even combine multiple CSS files into one to be sent directly in the HTML response. If you already use some kind of CSS and JS minification tool, this behavior is not nice, but it's still a fact of life you will have to deal with. One of the most obvious examples of this interference is that it can render your LESS links invalid: the intercepting proxies tend to change the content-type of the link, which client-side less.js uses to determine which files need to be translated from LESS to CSS.

There are two possible workarounds for this problem: for one, you could switch to HTTPS instead of HTTP, as this pretty much guarantees that the HTML you send is the exact HTML the client receives. The other alternative is to send HTTP headers that ask the intercepting proxies not to transform your content (but unlike the HTTPS solution, your mileage may vary, depending on whether the proxy honors these requests or not).



handlebars.js

Today, handlebars.js is my preferred templating library. It supports markup extensions (helpers), so you don't have to artificially create specific models which encapsulate display-only logic for individual views. I'm currently a big fan of templating-only solutions without too much databinding/MVVM magic. (But that's a topic for a future post …)

jQuery BBQ

jQuery BBQ (the Back Button and Query library) provides clean hashchange detection and emulation for browsers down to IE6. It works perfectly and is a must-have staple for all single-page applications.


underscore.js

When I don't feel like writing for-loops to filter data, underscore.js is my preferred means of functional data manipulation. It's the closest you'll get (for now) to Linq-style collection code while remaining within JavaScript.
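
As a point of comparison (plain ES5, not underscore's own API), the native array methods already give a taste of that Linq-like flavor; underscore wraps these patterns and adds many more utilities such as grouping and sorting. The data below is an invented example:

```javascript
// Filtering and projecting a collection without hand-written for-loops.
var orders = [
    { customer: "A", total: 120 },
    { customer: "B", total: 60 },
    { customer: "A", total: 80 }
];

var bigOrderCustomers = orders
    .filter(function (o) { return o.total >= 80; })  // Linq: Where
    .map(function (o) { return o.customer; });       // Linq: Select

console.log(bigOrderCustomers); // ["A", "A"]
```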


FullCalendar

FullCalendar is the calendar library for jQuery today.


jQuery.fastclick.js

jQuery.fastclick.js is a library based on an article on Google's Mobile Developer Blog which illustrates how you can avoid the 300ms click delay on most mobile touch-enabled devices. It helps your application react instantaneously, more closely mimicking the behavior of native applications.

Today, I'm using different fastclick derivatives which are more specifically tailored to each particular project. (Mainly depending on whether or not the application runs solely in PhoneGap-style wrappers, so that it only has to support touch, but not click.)


iso8601.js

iso8601.js is a library for parsing and writing dates from/to ISO 8601 strings. As long as JSON doesn't define a standard date format, I'm using this library to take matters into my own hands.

We're also generally not transferring timezone information in the date/time itself, but communicate this data out-of-band. (After all, it's no fun if you switch to a server in a different timezone only to realize that all your calendar entries are now off by a day.)
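
A quick sketch of the round-trip involved (today's JavaScript engines handle ISO 8601 natively; a library like iso8601.js filled this gap in the older browsers of the time):

```javascript
// Dates go over the wire as ISO 8601 strings in UTC.
var wireFormat = "2012-06-15T10:30:00.000Z";

var parsed = new Date(wireFormat);   // parse the ISO 8601 string
console.log(parsed.getUTCHours());   // 10

// Writing the date back out produces the exact same string.
console.log(parsed.toISOString() === wireFormat); // true - lossless round-trip
```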

Unit Testing

These were most of the general-purpose libraries I'm using on a day-to-day basis. In the next post, I'll talk about the libraries and environments I'm currently using for unit testing and test automation.

On the Love for JavaScript

(This is part 2 of a series of posts on HTML5 from the point of view of a converted .NET developer)

JavaScript is hardly everybody’s favorite language. I’ve talked to numerous developers in the Microsoft-centric .NET developer universe (which was more or less my exclusive area of work from 2001 to 2010), and a lot of them found interestingly graphic ways to express their disdain for this language. Some of them were 100% clear that a shift of their company towards JavaScript would trigger their immediate departure from said organization. (If I were to draw a comic strip about this, I’d have to use my monthly allotment of symbol characters within the first frame. It would look a lot like static line noise just after your modem’s carrier started to drop, if you know what I mean.)

But why is this language so polarizing? Well, for a start, it’s an old language. With a lot of baggage. Designed in a different world. But these are things you can work around … otherwise Visual Basic 6 wouldn’t have had that many vocal supporters when .NET was introduced in 2001/2002. And C++ would have none today.

JavaScript’s Issues

There are however two critical issues with JavaScript.

For one, JavaScript is largely paradigm-free. You can write procedural code, functional code and OO code with it. But the latter comes with its own pitfalls: JavaScript is a prototypal language, not a class-based one. This basically means that objects inherit behavior from other objects, not from classes. But life would still be easy at this point … if it weren’t for the fact that most modern JavaScript code also relies heavily on closures. For someone living in the .NET space, the flexibility of closures in JavaScript is dramatic. (And if you live in the Microsoft space, you can in fact fare quite nicely without ever explicitly knowing what a closure is … so never mind if you don’t yet know how powerful they can be.)
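
A minimal sketch of what "objects inherit from objects" means in practice (the animal/dog names are invented for illustration):

```javascript
// No class involved: dog delegates directly to the animal object.
var animal = {
    describe: function () { return this.name + " makes a sound"; }
};

var dog = Object.create(animal); // the animal object becomes dog's prototype
dog.name = "Rex";

console.log(dog.describe());                        // "Rex makes a sound"
console.log(Object.getPrototypeOf(dog) === animal); // true
```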

The second - and maybe more important - issue is quite likely rooted in the history of the language: if you base your love or hate of JavaScript on your recollection of code written before, say, 2006, you’re missing out big time. The pre-2006 style of code is largely outdated and no longer used. It’s not that JavaScript received new and powerful features afterwards; it’s simply that people started to write larger-scale software with JavaScript around that time. And this led to the creation of frameworks like the Dojo Toolkit in 2004, Prototype in 2005 and jQuery in 2006.

These frameworks not only provided solutions to important issues at that time, but they also provided guidance to other JavaScript developers. The world started to shift from mainly-procedural onclick-handlers with hundreds of global variables towards sophisticated, encapsulated, extensible and reusable code. It really seems that this paradigm-free language was in dire need of some paradigms.

Learning to Love It

I generally believe that there’s going to be a substantial amount of JavaScript in our future. And even if you decide that you don’t want to write the language, you should at least be able to read it.

If you’re - like I was in 2010 - returning back to this language, I’d like to recommend this one book, which has dramatically changed the way I think about JavaScript. I learned that basically everything I thought about JS was wrong … nothing more, nothing less.


JavaScript: The Good Parts by Douglas Crockford

But no matter if you plan to spend a lot of time with this language or just want to know enough to get by, there’s one thing which you absolutely, positively have to understand to be able to read any current JavaScript code: how this language deals with closures and how they are used to create private object members. Douglas Crockford wrote about this in 2001, and in the meantime, the approach outlined in Private Members in JavaScript (and comparable ways of reaching the same goal) has become one of the main ways encapsulation is achieved in JavaScript.
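
The pattern can be sketched in a few lines (createCounter is a made-up example, not taken from Crockford’s article, though the principle is his): variables captured by the returned functions stay accessible to those functions, but are invisible from the outside.

```javascript
function createCounter() {
    var count = 0; // "private": only the closures below can see this variable

    return {
        increment: function () { count += 1; return count; },
        current: function () { return count; }
    };
}

var counter = createCounter();
counter.increment();
counter.increment();

console.log(counter.current()); // 2
console.log(counter.count);     // undefined - no direct access to the private state
```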

Now, truth be told, I’d actually recommend that you read the book if possible, because his writing and his way of explaining things is quite a bit better in the book than in the old online article. But the end result remains the same: only after understanding how closures work in JavaScript will you be able to read the majority of today’s JavaScript code. And this is vitally important for a .NET developer, because this is one of the two big differences (prototypes being the other) between your language of choice and JavaScript.

This single thing stood between me and my enjoyment of this language for a very, very long time.

Now after making the language work for you, it’s time to look at the various frameworks and libraries, which is going to be the focus of the next posts …

18 Months of HTML5 - Will this work for me?

(This is part 1 of a series of posts.)

One of the main questions I usually get from our clients when talking about these ideas is: will this HTML5 thing work for us? While I can understand the importance of the question, I think it really depends on what you want to get out of it. Or, to ask it differently: where do you feel the limitations of your current environment?

For me, the limitations I personally felt (when creating WPF or Silverlight applications) were the following:

  • I want an easy way to reach both desktops and tablets (iPad, essentially)
  • But I still want to run the same apps on the web
  • I need to create interactive applications which work with locally stored data. Apps which also work when offline.
  • I want to support Android and iOS phones

And quite importantly: I am looking mainly at business applications. Things which help people get their jobs done. Not games. Not apps people run for fun. (Is this important? I don’t know. I want to create business apps which people actually have fun using. But I digress …)

So I think the main driver for me was the increased reach. If this is your goal, then HTML5 (and of course, whenever I mention HTML5 in this series, I’m always talking about HTML5 and related technologies like JS and CSS3) might be a nice way to get there. Is it the only way? Definitely not. And if you or your team are absolutely uncomfortable with the idea of writing JavaScript, not too happy about working with the idiosyncrasies of CSS, and don’t really want to get used to constantly seeing rendering differences in HTML markup itself, then you might want to stay away from it. You’d have a tough learning curve ahead of you, and there might be lower-hanging fruit with a dramatically better ROI for you.

Browser Differences and Standards Support

One of the main points of criticism of HTML5 is the lack of consistent (and, of course, complete) standards support. This drives one of the most important decisions you’ve got to make: what amount of support and functionality do you want to provide to downlevel browsers? (And “downlevel” might involve a feature-by-feature check for fine-grained decisions.) For the systems I’ve worked on, it was usually ok to fall back to classic client/server web behavior or to provide alternative plugin-based solutions for older browsers (for example, using Flash-based charts instead of Canvas-based ones). Some purists might object to this, but for me, HTML5 is only a means to an end: if it allows me to use Canvas-based charting and HTML5-based local storage on mobile devices, I’m still more than happy to use Flash-based charting and storage for older desktop environments.

I also think that life might be easier if you are mainly addressing web-savvy consumers or users in small companies, as you might be able to get by with targeting only the newest browsers: after all, these users will usually take advantage of their browsers’ and operating systems’ auto-update functionality. Enterprise users, however, are usually very different: IE7 and Firefox 3.0.x are still very much a reality in early 2012. So while HTML5 might get you onto your enterprise clients’ iPads and iPhones, it might not get you onto their desktops.

For my new company’s product, we’ve for example decided to support the latest two versions of all major desktop browsers, as well as Android >= 2.2 and iOS >= 4.0. (We’re actually supporting more browsers than this, but this selection is at least our baseline). Our product is commercial software, so we expect clients to spend some time evaluating it before committing to purchase access to it. We also believe that we can convince them that our software saves them more than enough time to make it worthwhile for them to switch to a newer browser to get access to all features.

Ship Your Own Browser

At first glance, this situation would be hugely different if you were to target the “Enterprise Market”.

Let’s just assume that you create WPF applications on Windows today and you’ve been toying with the idea of moving towards HTML5 to get bigger reach. The above analysis might seem to settle matters for you: with IE7, there simply is no HTML5. And the whole idea of creating your own levels of abstraction on top of fallback solutions for different browsers might not be your cup of tea at all.

In this case, you’d always have the possibility to simply bring your own browser.

Now, of course, I’m not talking about shipping IE10 with your application. Instead, you might think about wrapping your application in the reusable rendering components of one of today’s browsers (some of these approaches even work without any installation on the target machine). When looking at today’s browser market, you are basically looking at three different browser engines which make up the majority of desktop and mobile browsers: Internet Explorer, Gecko (Firefox) and WebKit (Apple Safari, Chrome, Android, iOS, BlackBerry, …). WebKit itself was open-sourced by Apple quite a few years ago and has been embedded by a multitude of applications, in a lot of places you wouldn’t expect it.

When following this bring-your-own-browser approach, you could basically create an xcopy-deployable (or maybe also an installable/MSI-deployable) shell application with a single GUI element: one large web browser control. You’d ship HTML5, JS and CSS files alongside your application so that these elements can run inside the browser control. Yes, that’s a bit simplified, but you get the idea: your enterprise customers can then take your HTML5 application (the same one which would also run in a regular browser if they updated their infrastructure) and deploy its MSI/installer package using their usual management tools. All of this without you having to support the old and outdated browsers which are way too common in this environment.

And if wrapping WebKit’s C++ code on your own sounds like a bit too much complexity: don’t worry. There are ready-made implementations like Adobe’s free AIR SDK (not “the Flash thing”, just the SDK) which provide exactly this: a known version of WebKit with a known API which you can ship alongside your application. If you are a more adventurous developer, you might even want to look into hosting your own WebKit environment directly - maybe by wrapping it in (the unfortunately somewhat outdated) WebKit.NET. And last but not least: for all kinds of mobile phones and tablets, Apache’s Cordova project (formerly known as PhoneGap) might present a very reasonable shell. (And of course, in the not-so-distant future, this approach might also help you get a substantial part of your application logic onto Windows 8’s WinRT environment as well.)

But isn’t this a lot of work?

Yes, porting your application is a lot of work, and you should be very careful to avoid overly optimistic ROI planning. In a lot of cases, it might simply not be worth the risk of abandoning a platform you’re very experienced with in favor of cutting-edge, moving targets and unknown environments. In other cases, this increased reach might be exactly what you need to distinguish your application from your competitors’ …

But even then, there’s still the language! But that’s something for the next post in this series …