ASP.NET Core with IIS: Setup Issues

If you are planning to run an ASP.NET Core application with IIS, then this blog post might be worth a glance.

These are a few issues I ran into ...

1. Targets in the .xproj file

If the project was started with RC1 or an earlier version of .NET Core, then check for the correct targets. Open the .xproj file and search for the following line

<Import Project="$(VSToolsPath)\DotNet\Microsoft.DotNet.targets" 
        Condition="'$(VSToolsPath)' != ''" />

and replace it with

<Import Project="$(VSToolsPath)\DotNet.Web\Microsoft.DotNet.Web.targets" 
        Condition="'$(VSToolsPath)' != ''" />

2. The process path in web.config

If you get a 502 after starting the web application, then take a look into the Windows Event Viewer. One of the errors you will probably see is:

Application 'MACHINE/WEBROOT/APPHOST/YOUR-APP with physical root 'C:\webapp\publish\' created process with commandline '"dotnet" WebApp.Selfhost.dll' but either crashed or did not respond or did not listen on the given port '28236', ErrorCode = '0x800705b4'

This error means that IIS is unable to start your app using the command dotnet. To remedy this issue open web.config and change the processPath from dotnet to C:\Program Files\dotnet\dotnet.exe.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <system.webServer>
        <handlers>
            <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
        </handlers>
        <aspNetCore processPath="C:\Program Files\dotnet\dotnet.exe"
            arguments=".\WebApp.Selfhost.dll"
            stdoutLogEnabled="false"
            stdoutLogFile=".\logs\stdout"
            forwardWindowsAuthToken="false" />
    </system.webServer>
</configuration>

3. When to call UseIISIntegration

If you are still getting a 502, then a possible cause may be that your application is listening on a different port than expected. This can happen if one of your configuration keys is Port. In this case your web application listens on that port instead of the dynamically generated one.

The configuration of the WebHostBuilder causing the error can look like the following:

var hostBuilder = new WebHostBuilder()
    .UseConfiguration(myConfig) // inserts config with key "Port"
    .UseIISIntegration()    // uses previously inserted port "by mistake"
    .UseKestrel()
    .UseStartup<Startup>();

To cure that, just change the order of the calls, because as of .NET Core 1.1 the listening URL will no longer be overwritten when running with IIS.

var hostBuilder = new WebHostBuilder()
    .UseIISIntegration()
    .UseConfiguration(myConfig)
    .UseKestrel()
    .UseStartup<Startup>();

 


(ASP).NET Core Dependency Injection: Disposing

After several years of using the same Dependency Injection (DI) framework, like Autofac, you may have a good understanding of how your components implementing the interface IDisposable are going to be disposed.

With the NuGet package Microsoft.Extensions.DependencyInjection the new .NET Core framework brings its own DI framework. It is not as powerful as the others, but it is sufficient for simple constructor injection. Nonetheless, even if you don't need any advanced features, you have to be aware of how the components are destroyed by this framework.

Let's look at a concrete example. Given are 2 classes, a ParentClass and a ChildClass:

public class ParentClass : IDisposable
{
	public ParentClass(ChildClass child)
	{
		Console.WriteLine("Parent created.");
	}

	public void Dispose()
	{
		Console.WriteLine("Parent disposed.");
	}
}

public class ChildClass : IDisposable
{
	public ChildClass()
	{
		Console.WriteLine("Child created");
	}

	public void Dispose()
	{
		Console.WriteLine("Child disposed.");
	}
}

First, we are using Autofac to resolve ParentClass:

var builder = new ContainerBuilder();
builder.RegisterType<ParentClass>().AsSelf();
builder.RegisterType<ChildClass>().AsSelf();
var container = builder.Build();

Console.WriteLine("== Autofac ==");
var parent = container.Resolve<ParentClass>();

container.Dispose();

With Autofac we are getting the following output:

== Autofac ==
Child created
Parent created.
Parent disposed.
Child disposed.

And now we are using .NET Core DI:

var services = new ServiceCollection();
services.AddTransient<ParentClass>();
services.AddTransient<ChildClass>();
var provider = services.BuildServiceProvider();

Console.WriteLine("== .NET Core ==");
var parent = provider.GetRequiredService<ParentClass>();

((IDisposable) provider).Dispose();

The output we get is:

== .NET Core ==
Child created
Parent created.
Child disposed.
Parent disposed.

Comparing the outputs we see that Autofac destroys the outer component (i.e. ParentClass) first and then the inner component (i.e. ChildClass). The .NET Core DI does not honor the dependency hierarchy and destroys the components in the same order they were created.

Most of the time the behavior of the .NET Core DI is not a problem because the components just free internal resources and are done. But in some cases the outer component has to do something during disposal, like unregistering from the inner component that may live on. If the inner component has not been disposed yet, all works fine; if it has, we get an ObjectDisposedException.
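
Here is a minimal sketch of such a scenario; the class names and the Unregister method are made up just to demonstrate the problem:

public class InnerComponent : IDisposable
{
	private bool _isDisposed;

	// a hypothetical method an outer component uses to unregister itself
	public void Unregister(object listener)
	{
		if (_isDisposed)
			throw new ObjectDisposedException(nameof(InnerComponent));

		// remove the listener from an internal collection ...
	}

	public void Dispose()
	{
		_isDisposed = true;
	}
}

public class OuterComponent : IDisposable
{
	private readonly InnerComponent _inner;

	public OuterComponent(InnerComponent inner)
	{
		_inner = inner;
	}

	public void Dispose()
	{
		// With the .NET Core DI the inner component has already been disposed
		// at this point, so this call throws an ObjectDisposedException.
		// With Autofac the outer component is disposed first and the call succeeds.
		_inner.Unregister(this);
	}
}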

If you start a new project with .NET Core, I suggest staying with the DI framework you are familiar with, unless it is just a sample application.

PS: Further information on how to switch from the .NET Core DI to other frameworks in an ASP.NET Core application: Replacing the default services container and ASP.NET Core with Autofac


.NET Abstractions - It's not just about testing!

With the introduction of .NET Core we got a framework that works not just on Windows, but on Linux and macOS as well. One of the best parts of .NET Core is that the APIs stayed almost the same compared to the old .NET, meaning developers can use their .NET skills to build cross-platform applications. The bad part is that the static types and classes without abstractions are still there as well.

A missing abstraction like an interface or an abstract base class means that developers are unable to change the behavior of their own components by injecting new implementations into them - and with static types it is even worse: you can't inject them at all. Luckily, most of the time we don't have to and don't want to change the behavior of all the components we use, unless we want to unit test a component. To be able to unit test one, and only one, component we have to provide it with dependencies that are completely under our control. An abstraction serves this purpose.
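
To illustrate, here is a small sketch of a component that depends on an abstraction instead of the concrete type; the MessageReader class is made up for this example, and it assumes the IStream interface introduced below, which mirrors Stream as described in the "Same signature" section:

public class MessageReader
{
    private readonly IStream _stream;

    // the component depends on the abstraction, not on System.IO.Stream
    public MessageReader(IStream stream)
    {
        _stream = stream;
    }

    public byte[] ReadHeader()
    {
        // simplified: assumes the first 4 bytes are read in one call
        var buffer = new byte[4];
        _stream.Read(buffer, 0, buffer.Length);
        return buffer;
    }
}

In a unit test we can now provide a mock or an in-memory implementation of IStream without touching the file system or the network.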

More and more of our customers demand unit tests, and some of them are using .NET Core to be able to run the applications on Windows and Linux. Unfortunately, there are either no abstractions available supporting .NET Core, or they do not follow the design decisions I would like to work with.

Inspired by SystemWrapper and System.IO.Abstractions I decided to create Thinktecture.Abstractions with certain opinionated design decisions in mind.

Design decisions

Interfaces vs abstract classes

Both an interface and an abstract class have pros and cons when it comes to creating an abstraction. By implementing an interface, we are sure that there is no code running besides ours. Furthermore, a class can implement more than one interface. With base classes we don't have that much flexibility, but we are able to define members with different visibility and can implement implicit/explicit cast operators.

For Thinktecture.Abstractions I chose interfaces because of the flexibility and transparency. For example, if I started using base classes I could be inclined to use internal members, preventing others from having access to some code. This approach would ruin the whole purpose of this project. Here is another example: imagine we are implementing a new stream. Because we are using an interface, the new stream can be both a Stream and an IStream. That means we don't even need to convert this stream back and forth when working with it. This would be impossible with a base class.

Example:

public class MyStream : Stream, IStream
{
    ...

}

Same signature

The abstractions have the same signature as the .NET types. The return type, not being a part of the signature by definition, is always an abstraction.

Example:

public interface IStringBuilder
{
    ...
    IStringBuilder Append(bool value);
}

Additionally, methods with concrete types as arguments have overloads using abstractions; otherwise the developer would be forced to make an unnecessary conversion just to pass a variable to the method.

Example:

public interface IMemoryStream : IStream
{
    ...

    void WriteTo(IStream stream);
    void WriteTo(Stream stream);
}

Don't use reserved namespaces

The namespaces System.* and Microsoft.* should not be used, to prevent collisions with types from the .NET team.

Conversion to abstraction

The conversion must not change the behavior or raise any exceptions. By using an extension method, we are able to convert a type without raising a NullReferenceException even if the .NET type is null. For easy usage, the extension methods for all types are in the namespace Thinktecture.

Example:

Stream stream = null;
IStream streamAbstraction = stream.ToInterface(); // streamAbstraction is null

Conversion back to .NET type

The abstractions offer a method to get the .NET type back to use it with other .NET classes and 3rd party components. The conversion must not raise any errors.

Example:

IStream streamAbstraction = ...
Stream stream = streamAbstraction.ToImplementation();

some3rdPartyComponent.Do(stream);

Support for .NET Standard Library (.NET Core)

The abstractions should not just support the traditional full-blown frameworks like .NET 4.5 and 4.6 but .NET Standard Library (.NET Core) as well.

Structure mirroring

The assemblies with abstractions are as small as the underlying .NET assemblies, i.e. Thinktecture.IO.Abstractions contains interfaces for types from System.IO only. Otherwise the abstractions would impose many more dependencies than the application actually needs.

The version of the .NET Standard Library supported by an abstraction is equal to the version supported by the underlying .NET assembly, e.g. Thinktecture.IO.Abstractions and System.IO both support .NET Standard 1.0.

Inheritance mirroring

The inheritance hierarchy of the interfaces is the same as the one of the concrete types. For example, DirectoryInfo derives from FileSystemInfo, and likewise the interface IDirectoryInfo extends IFileSystemInfo.
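
Example (a sketch; members omitted):

public interface IFileSystemInfo
{
    ...
}

public interface IDirectoryInfo : IFileSystemInfo
{
    ...
}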

Adapters (Wrappers)

The adapters are classes that make .NET types compatible with the abstractions. Usually, there is no need to use them directly except for setting up dependency injection in the composition root. The adapters are shipped together with the abstractions, i.e. Thinktecture.IO.Abstractions contains both IStream and StreamAdapter. Moving the adapters into their own assembly could be considered cleaner but would not be pragmatic, because the extension method ToInterface() uses the adapter, and it is virtually impossible to write components without the need to convert a .NET type to an abstraction.

Example:

// using the adapter directly
Stream stream = ...;
IStream streamAbstraction = new StreamAdapter(stream);

// preferred way
IStream streamAbstraction = stream.ToInterface();

No change in behavior

The adapters must not change the behavior of the invoked method or property nor raise any exception unless this exception is coming from the underlying .NET type.

Static members and constructor overloads

For easier use of adapters, they should provide the same static members and constructor overloads as the underlying type.

Example:

public class StreamAdapter : IStream
{
    public static readonly IStream Null;
    ...

}

public class FileStreamAdapter : IFileStream
{
    public FileStreamAdapter(string path, FileMode mode) { ... }
    public FileStreamAdapter(FileStream fileStream)  { ... }
    ...

}

Override methods of Object

The methods Equals, GetHashCode and ToString should be overridden and the calls delegated to the underlying .NET type. These methods are often used for comparison in collections like Dictionary<TKey, TValue>; otherwise the adapter would change (or rather break) the behavior.
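
A minimal sketch of this delegation in a StreamAdapter could look like this (not the actual implementation; the remaining members are omitted):

public class StreamAdapter : IStream
{
    private readonly Stream _stream;

    public StreamAdapter(Stream stream)
    {
        _stream = stream;
    }

    // delegate to the underlying stream so the adapter behaves like the
    // wrapped instance when used as a key in collections
    // (note: another adapter passed as 'obj' may need to be unwrapped first)
    public override bool Equals(object obj)
    {
        return _stream.Equals(obj);
    }

    public override int GetHashCode()
    {
        return _stream.GetHashCode();
    }

    public override string ToString()
    {
        return _stream.ToString();
    }

    ...
}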

Missing parts (?)

Factories, Builders

The Thinktecture.Abstractions assemblies are designed to be as lean as possible, without introducing new components. Factories and builders can (and should) be built on top of these abstractions.
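
For example, a factory built on top of the abstractions could look like the following sketch; IFileStream and FileStreamAdapter come from the abstractions shown above, the factory itself is hypothetical:

public interface IFileStreamFactory
{
    IFileStream Create(string path, FileMode mode);
}

public class FileStreamFactory : IFileStreamFactory
{
    public IFileStream Create(string path, FileMode mode)
    {
        // wraps the concrete FileStream into the abstraction
        return new FileStreamAdapter(path, mode);
    }
}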

Mocks

There is no need for me to provide any mocks because there are very powerful libraries like Moq that can be used when testing.
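
For example, a component depending on IStream can be provided with a Moq mock; the following lines are just a sketch and assume that IStream mirrors Stream.Read as described above:

// a sketch using Moq
var streamMock = new Mock<IStream>();
streamMock
    .Setup(s => s.Read(It.IsAny<byte[]>(), It.IsAny<int>(), It.IsAny<int>()))
    .Returns(0);

IStream streamAbstraction = streamMock.Object;
// inject streamAbstraction into the component under test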

Enhancements

In the near future there will be further abstractions, e.g. for HttpClient, as well as components that are built on top of the abstractions and offer an improved API or behavior.

Summary

Working with abstractions gives us the possibility to decide which implementations should be used in our applications. Furthermore, it is easier (or possible in the first place - think of static classes) to provide and use new implementations, compose them and derive from them. When it comes to testing, we could do it without abstractions, but we would test more than just one component, leading to more complex tests that would rather be integration tests than unit tests. Integration tests are slower and more difficult to set up because they may need access to the file system, the network or the database. Another (unnecessary) challenge would be to isolate the integration tests from each other because, in general, they run in parallel.


Entity Framework: Prevent redundant JOINs - watch your LINQ!

Fetching one record from a collection using navigational properties in Entity Framework may lead to unnecessary JOINs. To show the problem we need two tables Products and Prices.

EF Blog - Redundant Joins - DB Schema

The query shown below is fetching products along with their first price.

var products = ctx.Products
      .Select(p => new
      {
          p.Name,
          FirstPriceStartdate = p.Prices.OrderBy(price => price.Startdate).FirstOrDefault().Startdate,
          FirstPriceValue = p.Prices.OrderBy(price => price.Startdate).FirstOrDefault().Value,
      })
      .ToList();

Looks simple.
Let's look at the SQL statement or rather the execution plan.

EF Blog - Redundant Joins - Before Subselect

The table Prices is JOINed twice because of the two occurrences of the expression "p.Prices.OrderBy(...).FirstOrDefault()". The Entity Framework doesn't recognize that these expressions are identical but we can help. Just use a sub-select.

var products = ctx.Products
       .Select(p => new
       {
           Product = p,
           FirstPrice = p.Prices.OrderBy(price => price.Startdate).FirstOrDefault()
       })
      .Select(p => new
      {
          p.Product.Name,
          FirstPriceStartdate = p.FirstPrice.Startdate,
          FirstPriceValue = p.FirstPrice.Value,
      })
      .ToList();

That's it, the table Prices is JOINed only once now.

EF Blog - Redundant Joins - After Subselect

With a complex query you may need multiple sub-selects to select a navigational property of another navigational property (see the sketch below). But in this case please write an email to your colleagues or a comment, so the other developers understand what's going on; otherwise your funny-looking query will be refactored pretty soon :)
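
Such a chained sub-select could look like the following sketch; the navigation property Currency on Price and its property Code are made up for illustration:

var products = ctx.Products
      .Select(p => new
      {
          Product = p,
          FirstPrice = p.Prices.OrderBy(price => price.Startdate).FirstOrDefault()
      })
      .Select(p => new
      {
          p.Product,
          p.FirstPrice,
          // hypothetical navigation property on Price
          FirstPriceCurrency = p.FirstPrice.Currency
      })
      .Select(p => new
      {
          p.Product.Name,
          FirstPriceStartdate = p.FirstPrice.Startdate,
          FirstPriceValue = p.FirstPrice.Value,
          FirstPriceCurrencyCode = p.FirstPriceCurrency.Code
      })
      .ToList();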


Entity Framework: High performance querying trick using SqlBulkCopy and temp tables

Implementing database access with Entity Framework is pretty convenient, but sometimes the query performance can be very poor. Especially using navigational properties to load collections leads to significantly longer execution times and more I/O. To see the impact of the loading of a collection we have to take a look into profiling tools like SQL Server Profiler.

Let's look at the following use case, which was extrapolated from a customer project. We have three tables, Products, Suppliers and Prices, with Prices containing the entire price history.

Blog - EF - Using SqlBulkCopy and temp tables - DB

We want to select all products with their suppliers and future prices according to some filter criteria. The easiest approach is to use the navigational properties.

using(var ctx = new Entities())
{
    var products = ctx.Products
        .Where(p => p.Name.Contains("chocolate"))
        .Select(p => new FoundProduct()
        {
            Id = p.Id,
            Name = p.Name,
            FuturePrices = p.Prices
                .Where(price => price.Startdate > DateTime.Today),
            Suppliers = p.Suppliers
                .Where(s => s.IsDeliverable)
        })
        .ToList();
}

For this simple-looking query, depending on the complexity and the amount of data in the database, the execution can take a while. There are multiple reasons the database won't like this query. Entity Framework has to generate huge JOINs, concatenations and sorting operations to fetch the products, prices and suppliers at once; thus the result set is much bigger than when fetching the collections separately. Furthermore, it is more difficult to find optimal indexes because of the JOINs, the confusing execution plan and the suboptimal SQL statements Entity Framework has to generate to fulfill our demands.

If you have been using EF for a while you may be wondering why you didn't have this problem before. The answer is that you just didn't notice it, because the tables or the result sets were small. Just assume an unoptimized query takes 200 ms and an optimized one 20 ms. Although one query is 10 times faster than the other, both response times are considered 'fast' - and this often leads to the assumption that the query is perfect. In reality, though, the database needs much more resources to execute the unoptimized query. But that doesn't mean we have to change all our EF queries using navigational properties - be selective. Use profiling tools to decide which queries should be tuned and which not.

Let's look at the execution plan of the query from above to get an idea which operator consumes the most resources. Half of the resources are needed for sorting the data, although we don't have any order-by clause in our query! The reason is that the data must have a special sort order so that Entity Framework is able to process (materialize) the SQL result correctly.

Blog - EF - Using SqlBulkCopy and temp tables - Execution Plan

So, let's assume the result set is pretty big, the query takes too long and the profiling tool shows hundreds of thousands of reads that are needed to get our products.
The first approach would be to split the query. First we load the products, then the suppliers and the prices.

using(var ctx = new Entities())
{
    var products = ctx.Products
        .Where(p => p.Name.Contains("chocolate"))
        .Select(p => new FoundProduct()
        {
            Id = p.Id,
            Name = p.Name
        })
        .ToList();

    var productIds = products.Select(p => p.Id);

    var futurePricesLookup = ctx.Prices
        .Where(p => productIds.Contains(p.ProductId))
        .Where(p => p.Startdate > DateTime.Today)
        .ToLookup(p => p.ProductId);

    var suppliersLookup = ctx.Suppliers
        .Where(s => productIds.Contains(s.ProductId))
        .Where(s => s.IsDeliverable)
        .ToLookup(p => p.ProductId);

    foreach(var product in products)
    {
        product.FuturePrices = futurePricesLookup[product.Id];
        product.Suppliers = suppliersLookup[product.Id];
    }   
}

Now we are going to the database three times, but the result sets are a lot smaller, easier to profile and easier to find optimal indexes for. In a project of one of our customers the reads went from 500k down to 2k and the duration from 3 sec to 200 ms just by splitting the query.

For comparison using our simplified example with 100 products and 10k prices:

  • Original query needs 300 ms and has 240 reads
  • Split queries need (1 + 14 + 1) = 16 ms and have (2 + 115 + 4) = 121 reads

 

This approach performs very well as long as the number of product IDs we use in the Where statement stays small, say < 50. But that isn't always the case.
Especially when implementing a data exporter we have to be able to handle thousands of IDs, and using that many parameters will slow down the query significantly. But what if we insert all product IDs into a temporary table using SqlBulkCopy? With bulk copy there is almost no difference whether there are 100 IDs to insert or 10k. First, we create a few classes and methods to be able to bulk insert IDs of type Guid with just a few lines of code. The usage will look like this:

private const string TempTableName = "#TempTable";

using(var ctx = new Entities())
{
    // fetch products and the productIds

    RecreateTempTable(ctx);
    BulkInsert(ctx, null, TempTableName, () => new TempTableDataReader(productIds));

    // here come the queries for prices and suppliers
}

Before copying the IDs we need to create a temp table.

private void RecreateTempTable(Entities ctx)
{
    ctx.Database.ExecuteSqlCommand($@"
        IF(OBJECT_ID('tempdb..{TempTableName}') IS NOT NULL)
            DROP TABLE {TempTableName};

        CREATE TABLE {TempTableName}
        (
            Id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY CLUSTERED
        );
    ");
}

The bulk insert is encapsulated in a generic method to be able to use it with all kinds of data. The class BulkInsertDataReader<T> is a base class of mine for implementing the interface IDataReader very easily. The class can be found on GitHub: BulkInsertDataReader.cs

private void BulkInsert<T>(Entities ctx, DbContextTransaction tx, 
    string tableName, Func<BulkInsertDataReader<T>> getDatareader)
{
    SqlConnection sqlCon = (SqlConnection)ctx.Database.Connection;
    SqlTransaction sqlTx = (SqlTransaction)tx?.UnderlyingTransaction;

    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(sqlCon, 
        SqlBulkCopyOptions.Default, sqlTx))
    {
        bulkCopy.DestinationTableName = tableName;
        bulkCopy.BulkCopyTimeout = (int)TimeSpan.FromMinutes(10).TotalSeconds;

        using (var reader = getDatareader())
        {
            foreach (var mapping in reader.GetColumnMappings())
            {
                bulkCopy.ColumnMappings.Add(mapping);
            }

            bulkCopy.WriteToServer(reader);
        }
    }
}

Using the generic BulkInsertDataReader we implement a data reader for inserting Guids.

public class TempTableDataReader : BulkInsertDataReader<Guid>
{
    private static readonly IReadOnlyCollection<SqlBulkCopyColumnMapping> _columnMappings;

    static TempTableDataReader()
    {
        _columnMappings = new List<SqlBulkCopyColumnMapping>()
        {
            new SqlBulkCopyColumnMapping(1, "Id"),
        };
    }

    public TempTableDataReader(IEnumerable<Guid> guids)
        : base(_columnMappings, guids)
    {
    }

    public override object GetValue(int i)
    {
        switch (i)
        {
            case 1:
                return Current;
            default:
                throw new ArgumentOutOfRangeException("Unknown index: " + i);
        }
    }
}

Now we have all IDs in a temporary table. Let’s rewrite the query from above to use JOINs instead of the method Contains.

using(var ctx = new Entities())
{
    // fetch products and the productIds
    // create temp table and insert the ids into it

    var futurePricesLookup = ctx.Prices
        .Join(ctx.TempTable, p => p.ProductId, t => t.Id, (p, t) => p)
        .Where(p => p.Startdate > DateTime.Today)
        .ToLookup(p => p.ProductId);

    var suppliersLookup = ctx.Suppliers
        .Join(ctx.TempTable, s => s.ProductId, t => t.Id, (s, t) => s)
        .Where(s => s.IsDeliverable)
        .ToLookup(p => p.ProductId);

    // set properties FuturePrices and Suppliers like before
}

Here the question comes up: where does the entity set TempTable come from when using the database-first approach? The answer is that we need to edit the edmx file manually to introduce the temp table to Entity Framework. For that, open the edmx file in an XML editor and copy the EntityContainer, EntityType and EntityContainerMapping content to the right places as shown below.

Remark: Entity Framework supports a so-called DefiningQuery, which we use to define the temp table, but the EF designer of Visual Studio doesn't support this feature. The consequence is that the sections we define manually will be deleted after an update of the EF model. In this case we need to revert these changes.

<edmx:Edmx Version="3.0">
    <edmx:Runtime>
        <!-- SSDL content -->
        <edmx:StorageModels>
            <Schema Namespace="Model.Store" Provider="System.Data.SqlClient">
                <EntityContainer Name="ModelStoreContainer">
                    <EntitySet Name="TempTable" EntityType="Self.TempTable">
                        <DefiningQuery>
                            SELECT #TempTable.Id
                            FROM #TempTable
                        </DefiningQuery>
                    </EntitySet>
                </EntityContainer>
                <EntityType Name="TempTable">
                    <Key>
                        <PropertyRef Name="Id" />
                    </Key>
                    <Property Name="Id" Type="uniqueidentifier" Nullable="false" />
                </EntityType>
            </Schema>
        </edmx:StorageModels>
        <!-- CSDL content -->
        <edmx:ConceptualModels>
            <Schema Namespace="Model" Alias="Self">
                <EntityContainer Name="Entities" annotation:LazyLoadingEnabled="true">
                    <EntitySet Name="TempTable" EntityType="Model.TempTable" />
                </EntityContainer>
                <EntityType Name="TempTable">
                    <Key>
                        <PropertyRef Name="Id" />
                    </Key>
                    <Property Name="Id" Type="Guid" Nullable="false" />
                </EntityType>
            </Schema>
        </edmx:ConceptualModels>
        <!-- C-S mapping content -->
        <edmx:Mappings>
            <Mapping Space="C-S">
                <EntityContainerMapping StorageEntityContainer="ModelStoreContainer" CdmEntityContainer="Entities">
                    <EntitySetMapping Name="TempTable">
                        <EntityTypeMapping TypeName="Model.TempTable">
                            <MappingFragment StoreEntitySet="TempTable">
                                <ScalarProperty Name="Id" ColumnName="Id" />
                            </MappingFragment>
                        </EntityTypeMapping>
                    </EntitySetMapping>
                </EntityContainerMapping>
            </Mapping>
        </edmx:Mappings>
    </edmx:Runtime>
</edmx:Edmx>

That’s it. Now we are able to copy thousands of records into a temp table very fast and use this data for JOINs.


Mimicking $interpolate: An Angular 2 interpolation service

In an Angular 1 application that we created for one of our customers, we used the $interpolate service to build a simple templating engine. The user was able to create snippets with placeholders within the web application and to use these message fragments to compose an email to reply to a support request.

In Angular 2 there is no such service as $interpolate - but that is not a problem, because we have got an abstract syntax tree (AST) parser to build our own interpolation library. Let's build a component that takes a format string (with placeholders) and an object with properties to be used for the replacement of the placeholders. The usage looks like this:

// returns 'Hello World!'
interpolation.interpolate('Hello {{place.holder}}', { place: { holder: 'World!' } });

First, we need to inject the parser from Angular 2 and create a lookup to cache our interpolations.

constructor(parser: Parser) {
    this._parser = parser;
    this._textInterpolations = new Map<string, TextInterpolation>();
}

The class TextInterpolation is just a container for saving the parts of a format string. To get the interpolated string we need to call the function interpolate. The example from above will have 2 parts:

  • String 'Hello '
  • Property getter for {{place.holder}}

 

class TextInterpolation {
    private _interpolationFunctions: ((ctx: any)=>any)[];

    constructor(parts: ((ctx: any) => any)[]) {
        this._interpolationFunctions = parts;
    }

    public interpolate(ctx: any): string {
        return this._interpolationFunctions.map(f => f(ctx)).join('');
    }
}

Before we can create our TextInterpolation we need to parse the format string to get an AST.

let ast = this._parser.parseInterpolation(text, null);

if (!ast) {
    return null;
}

if (ast.ast instanceof Interpolation) {
    textInterpolation = this.buildTextInterpolation( ast.ast);
} else {
    throw new Error(`The provided text is not a valid interpolation. Provided type ${ast.ast.constructor && ast.ast.constructor['name']}`);
}

The AST of type Interpolation has 2 collections, one with strings and the other with expressions. Our interpolation service should support property-accessors only, i.e. no method calls or other operators.

private buildTextInterpolation(interpolation: Interpolation): TextInterpolation {
    let parts: ((ctx: any) => any)[] = [];

    for (let i = 0; i < interpolation.strings.length; i++) {
        let string = interpolation.strings[i];

        if (string.length > 0) {
            parts.push(ctx => string);
        }

        if (i < interpolation.expressions.length) {
            let exp = interpolation.expressions[i];

            if (exp instanceof PropertyRead) {
                var getter = this.buildPropertyGetter(exp);
                parts.push(this.addValueFormatter(getter));
            } else {
                throw new Error(`Expression of type ${exp.constructor && exp.constructor.name} is not supported.`);
            }
        }
    }

    return new TextInterpolation(parts);
};

The strings don’t need any special handling but the property getters do. The first part of the special handling happens in the method buildPropertyGetter that fetches the value of the property (and the sub property) of an object.

private buildPropertyGetter(exp: PropertyRead): ((ctx: any) => any) {
    var getter: ((ctx: any) => any);

    if (exp.receiver instanceof PropertyRead) {
        getter = this.buildPropertyGetter(exp.receiver);
    } else if (!(exp.receiver instanceof ImplicitReceiver)) {
        throw new Error(`Expression of type ${exp.receiver.constructor && (exp.receiver).constructor.name} is not supported.`);
    }

    if (getter) {
        let innerGetter = getter;
        getter = ctx => {
            ctx = innerGetter(ctx);
            return ctx && exp.getter(ctx);
        };
    } else {
        getter = <(ctx: any)=>any>exp.getter;
    }

    return ctx => ctx && getter(ctx);
}

The second part of the special handling is done in addValueFormatter, which returns an empty string when the value returned by the property getter is null or undefined, because otherwise these values would not be formatted as an empty string but as the strings 'null' and 'undefined', respectively.

private addValueFormatter(getter: ((ctx: any) => any)): ((ctx: any) => any) {
    return ctx => {
        var value = getter(ctx);

        if (value === null || _.isUndefined(value)) {
            value = '';
        }

        return value;
    }
}

The interpolation service including unit tests can be found on GitHub: angular2-interpolation


.NET Core: Lowering the log level of 3rd party components

With the new .NET Core framework and libraries we get an interface called Microsoft.Extensions.Logging.ILogger to be used for writing log messages. Various 3rd-party and built-in components make very good use of it. To see how much is being logged, just create a simple Web API using Entity Framework (EF) and the Kestrel server, and in a few minutes you will get thousands of log messages.

The downside of such a well-known interface is that the log level chosen by the 3rd-party developers may not fit the software using it. For example, Entity Framework uses the log level Information for logging the generated SQL queries. For the EF developers this is a good choice because the SQL query is important information for them - but for our customers using EF this information is for debugging purposes only.

Luckily, it is very easy to change the log level of a specific logging source (EF, Kestrel etc.). For that we need a simple proxy that implements the interface ILogger. The proxy changes the log level to Debug in the methods Log and IsEnabled and calls the corresponding method of the real logger with the new parameters.

public class LoggerProxy : ILogger
{
	private readonly ILogger _logger;

	public LoggerProxy(ILogger logger)
	{
		if (logger == null)
			throw new ArgumentNullException(nameof(logger));

		_logger = logger;
	}

	public void Log(LogLevel logLevel, int eventId, object state, 
		Exception exception, Func<object, Exception, string> formatter)
	{
		if (logLevel > LogLevel.Debug)
			logLevel = LogLevel.Debug;

		_logger.Log(logLevel, eventId, state, exception, formatter);
	}

	public bool IsEnabled(LogLevel logLevel)
	{
		if (logLevel > LogLevel.Debug)
			logLevel = LogLevel.Debug;

		return _logger.IsEnabled(logLevel);
	}

	public IDisposable BeginScopeImpl(object state)
	{
		return _logger.BeginScopeImpl(state);
	}
}

To inject the LoggerProxy we have to create another proxy that implements the interface Microsoft.Extensions.Logging.ILoggerFactory. The method we are interested in is CreateLogger, which gets the category name as a parameter. The category name may be the name of the class requesting the logger or the name of the assembly. In this method we let the real logger factory create a logger for us, and if this logger is for Entity Framework we return our LoggerProxy wrapping the real logger.

public class LoggerFactoryProxy : ILoggerFactory
{
	private readonly ILoggerFactory _loggerFactory;
	
	public LogLevel MinimumLevel
	{
		get { return _loggerFactory.MinimumLevel; }
		set { _loggerFactory.MinimumLevel = value; }
	}

	public LoggerFactoryProxy(ILoggerFactory loggerFactory)
	{
		if (loggerFactory == null)
			throw new ArgumentNullException(nameof(loggerFactory));

		_loggerFactory = loggerFactory;
        }

	public ILogger CreateLogger(string categoryName)
	{
		var logger = _loggerFactory.CreateLogger(categoryName);

		if (categoryName.StartsWith("Microsoft.Data.Entity.", StringComparison.OrdinalIgnoreCase))
			logger = new LoggerProxy(logger);

		return logger;
        }

	public void AddProvider(ILoggerProvider provider)
	{
		_loggerFactory.AddProvider(provider);
	}

	public void Dispose()
        {
		_loggerFactory.Dispose();
	}
}

Finally, we need to register the factory proxy with the dependency injection container.

public void ConfigureServices(IServiceCollection services)
{
	var factory = new LoggerFactoryProxy(new LoggerFactory());
	services.AddInstance(factory);
}

From now on the log messages coming from Entity Framework will be logged with the log level Debug.


AngularJS: Dynamic Directives

In this post, we will look into an approach for exchanging the definition of an AngularJS directive, i.e. the template, controller, compile/link functions etc., after the application has been bootstrapped, given that carrying out a full reload is not an option.

Assume that you have an application that allows the user to have multiple accounts to switch between. Depending on the currently active account, the application establishes a connection to a different server, which in turn has a different definition for the same AngularJS directive.

Here is a simplified example:

<!-- the user is not logged in => show nothing (or some default content) -->
<my-directive message="'Hello World'" />

---------------------------------------------------------------------------------

<!-- the user is connected to “server A” => fetch and apply the directive definition delivered by “server A” 
{
	restrict: 'E',
	template: '<div>Coming from Server A</div>'
};
-->
<my-directive>
	<div>Coming from Server A</div>
</my-directive>

---------------------------------------------------------------------------------

<!-- the user is connected to “server B” => fetch and apply the directive delivered by “server B”
{
	restrict: 'E',
	scope: {
		message: '='
	},
	template: '<span>And now from Server B: {{message}}</span>'
};
-->
<my-directive>
	<span>And now from Server B: Hello World</span>
</my-directive>

To be able to exchange the entire definition of an AngularJS directive after the application has started we need to address the following problems:

  • Lazy loading
  • Directive definition exchange
  • On-demand recompilation

Let’s have a look at each point in more detail now.

1) Lazy loading

Problem: The usual way to register a directive does not work after the application has bootstrapped.

// the usual way to register a directive
angular.module('app').directive('myDirective', MyDirective);

For the registration of an AngularJS directive after the application has started, we need the $compileProvider. We can get hold of the $compileProvider during the configuration phase and save the reference somewhere we can access later, like in a service (in our example it will be the dynamicDirectiveManager).

// grab the $compileProvider
angular.module('app')
	.config(function ($compileProvider, dynamicDirectiveManagerProvider) {
		dynamicDirectiveManagerProvider.setCompileProvider($compileProvider);
	});

// Later on, we are able to register new directives using the $compileProvider
$compileProvider.directive.apply(null, [name, constructor]);

By using the $compileProvider we are now able to lazy-load directives.

2) Directive definition exchange

Problem: Re-registering a directive using the same name but different definition (i.e. template, controller, etc.) does not work.

$compileProvider.directive
	.apply(null, [ 'myDirective', function() { return { template: 'Template A', … } } ]);

// … some time later …
$compileProvider.directive
	.apply(null, [ 'myDirective', function() { return { template: 'Other template', … } } ]); 
// the previous statement won’t overwrite the directive

Due to caching in AngularJS, the directive that we are trying to overwrite is not going to be exchanged for a new one. To remedy this problem, we have no other choice but to change the name in some way, for example by appending a suffix. Luckily, we can hide this renaming in the previously mentioned dynamicDirectiveManager.

// will compile to <my-directive-optionalsuffix>
dynamicDirectiveManager.registerDirective('myDirective', function() { return { template: 'Template', … } }, 'optionalsuffix');

// … some time later …
// will compile to <my-directive-randomsuffix>
dynamicDirectiveManager.registerDirective('myDirective', function() { return { template: 'Other template', … } });

3) On-demand recompilation

Problem: Now we are able to exchange a directive definition for a new one, but the corresponding directives on our HTML page will not recompile themselves, especially if the directives (apart from the markup in the page) did not exist at all a moment ago.

To be able to recompile the directives on demand, the desired directive will be created by another one (say <dynamic-directive>) that we have full control over. That way we can call $compile() every time a directive has been overwritten.

<!-- Remark: the attribute “message” has no meaning for the <dynamic-directive> but for <my-directive> -->

<!-- dynamic directive … -->
<dynamic-directive element-name="my-directive" message="'Hello World'"></dynamic-directive>
<!-- or -->
<dynamic-directive element-name="{{getDirectiveName()}}" message="'Hello World'"></dynamic-directive>

<!-- … will initially compile into something like … -->
<dynamic-directive element-name="my-directive" message="'Hello World'">
	<my-directive message="'Hello World'" />
</dynamic-directive>

<!-- … and after a registration of a new directive definition … -->
<dynamic-directive element-name="my-directive" message="'Hello World'">
	<my-directive-someprefix message="'Hello World'" />
</dynamic-directive>

<!-- from now on, the <my-directive> is on its own, at least until the next exchange of the directive definition … -->

By using the $compile service we solved one problem but created a memory leak. If the inner directive (<my-directive>) requests an isolated or child scope, then we get an orphaned scope on each recompile, thereby slowing down the whole application bit by bit.

To solve this issue, we need to check whether the scope of the inner directive is different than the scope of the <dynamic-directive>. If so, then the inner scope will be disposed of by calling $destroy().

var innerScope = currentInnerElement.isolateScope() || currentInnerElement.scope();

if (innerScope && (innerScope !== scope)) {
	innerScope.$destroy();
}

Voilà!

Conclusion

This is a quite special case and it requires quite some code just to overwrite a directive without restarting the application. Luckily, the bulk of the work is done either by the <dynamic-directive> or by dynamicDirectiveManager.

Live working example

http://jsfiddle.net/Pawel_Gerr/y22ZK/