Interoperability Feed

thinktecture has hit the 10,000 downloads milestone on CodePlex

After thinktecture StarterSTS hit the 5,000 downloads mark a few weeks ago (congrats again, Dom!), we have now hit 10,000 downloads for our WCF-based web services contract-first tool

[Want to learn more about it and the idea behind it…? Read this MSDN Magazine article]

Thanks to a great team!


thinktecture StarterSTS now officially ‘powered by Windows Azure’

A few hours ago I got the final notice that StarterSTS has now officially been admitted to the Azure cloud Olympus:


OK, Dominick: on to releasing 1.5… Smile

Writing trace data to your beloved .svclog files in Windows Azure (aka ‘XmlWriterTraceListener in the cloud’)

Tracing is probably one of the most-discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and, in parts, massively counter-intuitive.

One way of doing tracing is to use the System.Diagnostics features like trace sources and trace listeners, which have been in place since .NET 2.0. Since .NET 3.0 and the rise of WCF (Windows Communication Foundation), the XmlWriterTraceListener has also seen extensive usage. We can see countless occurrences of the typical .svclog file extension in .NET projects around the world – and we can view these files with the SvcTraceViewer.exe tool from the Windows SDK.
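For reference, a typical (non-Azure) configuration that produces such .svclog files looks roughly like this – the listener name and file path are illustrative, not taken from a real project:

```xml
<system.diagnostics>
  <sources>
    <!-- capture WCF activity tracing into an .svclog file -->
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing">
      <listeners>
        <add name="xmlListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\service_trace.svclog" />
      </listeners>
    </source>
  </sources>
  <trace autoflush="true" />
</system.diagnostics>
```

The resulting file can then be opened directly in SvcTraceViewer.exe.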

All nice and well. But what about Windows Azure?
In Windows Azure there is a default trace listener called Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener from the Microsoft.WindowsAzure.Diagnostics assembly.

If you use this listener and trace data via trace sources, your data will be stored in Windows Azure Storage tables. Take some time to play around with it and you will find that the data in there is close to useless and surely not very consumer-friendly (e.g. try to search for a particular text or error message. Horror).
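For comparison, this is roughly how that default listener gets wired up in app.config/web.config – the snippet mirrors what the Visual Studio Azure project templates generate; the exact version may differ depending on your SDK:

```xml
<system.diagnostics>
  <trace>
    <listeners>
      <!-- routes Trace.* calls into Windows Azure Diagnostics (table storage) -->
      <add name="AzureDiagnostics"
           type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener,
                 Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0,
                 Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </listeners>
  </trace>
</system.diagnostics>
```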

So, given these two facts, I thought it would be helpful to have a custom trace listener which I can configure just through my config file and which uses Azure local storage to store .svclog files. From there on I use scheduled transfers (which I demonstrated here) to move the .svclog files (which are now custom error logs for Windows Azure) to Azure blob storage. From there I can just open them up with the tool of my choice.

Here is the simplified code:

using System.Configuration;
using System.Diagnostics;
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace Thinktecture.Diagnostics.Azure
{
    public class LocalStorageXmlWriterTraceListener : XmlWriterTraceListener
    {
        public LocalStorageXmlWriterTraceListener(string initializeData)
            : base(GetFileName(initializeData))
        {
        }

        public LocalStorageXmlWriterTraceListener(string initializeData, string name)
            : base(GetFileName(initializeData), name)
        {
        }

        private static string GetFileName(string initializationData)
        {
            try
            {
                // initializeData is expected in the form "<localResourceName>\<fileName>"
                var localResourceItems = initializationData.Split('\\');
                var localResourceFolder = localResourceItems[0];
                var localResourceFile = localResourceItems[1];

                var localResource = RoleEnvironment.GetLocalResource(localResourceFolder);

                var fileName = Path.Combine(localResource.RootPath, localResourceFile);

                return fileName;
            }
            catch
            {
                throw new ConfigurationErrorsException(
                    "No valid Windows Azure local resource name found in configuration.");
            }
        }
    }
}

In my Azure role (a worker in this particular case, but it also works with a web role) I configure the trace listener like this:

    <system.diagnostics>
        <trace autoflush="true">
            <listeners>
                <add type="Thinktecture.Diagnostics.Azure.LocalStorageXmlWriterTraceListener,
                       AzureXmlWriterTraceListener, Version=,
                       Culture=neutral, PublicKeyToken=null"
                     initializeData="TraceFiles\worker_trace.svclog" />
            </listeners>
        </trace>
    </system.diagnostics>
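For this to work, the TraceFiles local resource referenced in initializeData has to be declared in the service definition file (ServiceDefinition.csdef). A sketch – the role name and size are just examples:

```xml
<WorkerRole name="TracingWorker">
  <LocalResources>
    <!-- local disk space on the VM instance where the .svclog file is written -->
    <LocalStorage name="TraceFiles"
                  sizeInMB="100"
                  cleanOnRoleRecycle="false" />
  </LocalResources>
</WorkerRole>
```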

After scheduling the transfer of my log files folder I can use a tool like Cerebrata’s Cloud Storage Studio to look at my configured blob container (named ‘traces’) – and I can see my .svclog file.


Double-clicking the file in blob storage opens it up in the Service Trace Viewer. From here on it is all the good ol’ tracing file inspection experience Winking smile


Note: as you can see the Service Trace Viewer tool is not just for WCF – but you knew that before!


UPDATE: this does not work properly with Azure SDK 1.3 and full IIS due to permission issues – there is more information in the SDK release notes. Very unfortunate Sad smile

Hope this helps.

Transferring your custom trace log files in Windows Azure for remote inspection

You can write your trace data explicitly to files, or use tracing facilities like .NET’s trace source and listener infrastructure (or third-party frameworks like log4net or NLog or…). So, this is not really news Smile

In your Windows Azure applications you can add special folders – which you reference through a local resource in the VM’s local storage – to the diagnostics monitor’s configuration. The files in those folders will then be transferred to the configured Azure Storage blob container.

Without any further ado:

public override bool OnStart()
{
    Trace.WriteLine("Entering OnStart...");

    // local resource (folder in the VM's local storage) that holds the trace files
    var traceResource = RoleEnvironment.GetLocalResource("TraceFiles");

    var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
    config.Directories.DataSources.Add(
        new DirectoryConfiguration
        {
            Path = traceResource.RootPath,
            Container = "traces",
            DirectoryQuotaInMB = 100
        });
    config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(10);

    DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

    return base.OnStart();
}

Note: remember that there are special naming conventions in Azure Storage – e.g. blob storage container names must be all lowercase. So, do not try to use ‘Traces’ as the container name in the above code!
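The ‘DiagnosticsConnectionString’ setting used above points the diagnostics monitor at a storage account and lives in the service configuration file (ServiceConfiguration.cscfg). A sketch – the role name is an example, and the development storage value shown here is only suitable for local testing:

```xml
<Role name="TracingWorker">
  <ConfigurationSettings>
    <!-- storage account that receives the transferred trace files;
         replace with a real account connection string for cloud deployment -->
    <Setting name="DiagnosticsConnectionString"
             value="UseDevelopmentStorage=true" />
  </ConfigurationSettings>
</Role>
```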

And a side note: of course this whole process incurs costs – costs for data storage in Azure Storage, costs for transactions (i.e. calls) against Azure Storage, and costs for transferring the data from Azure Storage out of the data center for remote inspection.

Alright – this is the base for the next blog post which shows how to use a well-known trace log citizen from System.Diagnostics land in the cloud.

Hope this helps (so far).

Monitoring Windows Azure applications with System Center Operations Manager (SCOM)

Windows Azure offers a few options to collect monitoring data at runtime, including event log data, performance counters, Azure logs, your custom logs, IIS logs, etc. There are no really good ‘official’ monitoring tools from Microsoft – besides a management pack for System Center Operations Manager (SCOM).

To get started monitoring your Azure applications with an enterprise-style systems management solution, you need to do the following:

  • Install SCOM. SCOM is a beast, but the good news is you can install all the components (including Active Directory and SQL Server) on a single server. Here is a very nice walkthrough – long and tedious, but very good.
  • Download and import the Azure management pack (MP) for SCOM. Note that the MP is still RC at the time of this writing but Microsoft support treats it like an RTM version already.
  • Follow the instructions in the guide from the download page on how to discover and start monitoring your Azure applications.

Voila. If everything worked then you will see something like this:

SCOM Azure MP in action


Note: This is a very ‘enterprise-y’ solution – I surely hope to see a more light-weight solution by Microsoft soon targeted at ISVs and the like.

Hope this helps.

Running a 32-bit IIS application in Windows Azure

Up there, everything is 64 bits. By design.

What do you do if you have 32-bit machines locally – erm, sorry: on-premise – and want to move your existing applications, let’s say web applications, to Windows Azure?

For this scenario you need to enable 32-bit support in (full) IIS in your web role. The following is a startup task script that enables 32-bit applications in IIS settings by using appcmd.exe.


%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.enable32BitAppOnWin64:true

And this is the necessary startup task defined in the service definition file:


   <Startup>
      <Task commandLine="enable32bit.cmd" executionContext="elevated" taskType="simple" />
   </Startup>

Hope this helps.

Windows Azure VM Role is still PaaS - if you want IaaS choose Amazon EC2, seriously

‘Nuff said?

Maybe we could blame Microsoft for naming the new VM Role feature (Windows Server 2008 R2 is supported as the guest OS; 2011 should bring broader OS support) introduced at PDC10 in a confusing way – but the fact remains:
it is based on the Azure service model, and the Windows Azure Fabric Controller (FC) is still in charge of everything. Even though you uploaded your own prepared VM, the FC may – and will – decide to take your VM instances offline, start new instances, reprogram the load balancers, etc.

Yes, Windows Azure Compute is about PaaS (Platform-as-a-Service), also with the VM role now in place. Don’t get confused by the new role feature name. Your applications (and thus roles!) need to be state-agnostic.

BTW: what is not possible with the VM Role today is automatic OS updating/patching.
This means that you have no feasible way to keep the OS up to date. When you try to run Windows Update it might work (actually, it should). But then two things can happen:

  • your VM needs to reboot due to the Windows Update patches
  • the FC decides to reboot your VM, or take it offline and hook up a new instance

In either case you end up with your original VM image again – bingo (and in the first case you will feel like you are in Groundhog Day… “Hey babe – dududu – I got you babe…”). That is why the official hands-on lab shows how to disable Windows Update entirely.

Think about the Windows Azure VM role, twice.