Entity Framework Core 3.0: "Hidden" GROUP BY Capabilities (Part 2)

In the previous blog post we used a navigational property to work around the limitations of the extension method GroupBy. The problem is, there is not always such a property we can use for grouping. Especially when following domain-driven design practices, a bidirectional navigation is often undesirable so as not to pollute the domain model (too much). Besides intentionally omitting a navigational property, there are use cases where we have to group data by a property such as the name of a record. In that case there simply cannot be a navigational property.

In this post we look at LINQ queries that behave the same way as the query using the navigational property from the previous post:

var firstProducts = Context.ProductGroups
    .Select(g => g.Products.OrderBy(p => p.Name).FirstOrDefault())
    .ToList();

Let's achieve the same without using the property Products. Yet again, we start from the product groups but instead of using the navigational property we use the DbSet Products. The join condition between a group and the products is specified inside of the extension method FirstOrDefault.

var firstProducts = Context.ProductGroups
    .Select(g => Context.Products
        .OrderBy(p => p.Name)
        .FirstOrDefault(p => p.GroupId == g.Id))
    .ToList();

The generated SQL looks like the following:

SELECT t0.*
FROM ProductGroups AS p
LEFT JOIN
(
    SELECT *
    FROM
    (
        SELECT *, ROW_NUMBER() OVER(PARTITION BY p0.GroupId ORDER BY p0.Name) AS row
        FROM Products AS p0
    ) AS t
    WHERE t.row <= 1
) AS t0 ON p.Id = t0.GroupId

The SQL statement is identical to the one that was generated when using the navigational property.

Now, imagine there is no DbSet ProductGroups and the property GroupId is just a regular property like Name. As a replacement for missing ProductGroups we use SELECT DISTINCT GroupId on the Products. The 2nd part of the query stays the same.

var firstProducts = Context.Products
    .Select(p => p.GroupId)
    .Distinct()
    .Select(id => Context.Products
        .OrderBy(p => p.Name)
        .FirstOrDefault(p => p.GroupId == id))
    .ToList();

The generated SQL statement is:

SELECT t1.*
FROM
(
    SELECT DISTINCT p.GroupId
    FROM Products AS p
) AS t
LEFT JOIN
(
    SELECT t0.*
    FROM
    (
        SELECT p0.*, ROW_NUMBER() OVER(PARTITION BY p0.GroupId ORDER BY p0.Name) AS row
        FROM Products AS p0
    ) AS t0
    WHERE t0.row <= 1
) AS t1 ON t.GroupId = t1.GroupId

 

For grouping data, the built-in capabilities of Entity Framework Core 3.0 should be enough in most cases. For more advanced use cases there is not much we can do besides extending EF with custom functionality.

Demos: NavigationPropertiesAlternativeQueriesDemo.cs

 

 


Entity Framework Core 3.0: "Hidden" GROUP BY Capabilities (Part 1)

With Entity Framework Core 3.0 (EF) the internal implementation of the LINQ query translation has been changed a lot. So, some queries that didn't work previously are working now and some that worked are not working anymore. :)

The LINQ extension method GroupBy is a special one: sometimes it works and sometimes it throws an exception. One of the use cases that is supported is the usage of an aggregate function right after calling GroupBy:

var productCount = Context.Products
    .GroupBy(p => p.GroupId)
    .Select(g => new
    {
        GroupId = g.Key,
        Count = g.Count()
    })
    .ToList();

The generated SQL statement is the one we expect:

SELECT p.GroupId, COUNT(*) AS Count
FROM Products AS p
GROUP BY p.GroupId

The previous use case is simple and very limited in its usefulness. A use case that I see in projects more often is something like: "Give me the first/last product for each group ordered by columns x, y". The query that comes to mind first is:

var firstProducts = Context.Products
    .GroupBy(p => p.GroupId)
    .Select(g => g.OrderBy(p => p.Name).FirstOrDefault())
    .ToList();

Looks good but leads to an InvalidOperationException. One way to get the desired result without using the method GroupBy is to start the LINQ query with ProductGroups and to use the navigational property Products.

var firstProducts = Context.ProductGroups
    .Select(g => g.Products.OrderBy(p => p.Name).FirstOrDefault())
    .ToList();

Remarks: If a group has no products then null is going to be pushed into our list firstProducts. Most of the time these nulls are of no use and should be filtered out.
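Filtering the nulls out can be done right in the query; a minimal sketch (whether the null check is translated to SQL or has to be evaluated in memory may depend on the EF version):

```csharp
var firstProducts = Context.ProductGroups
    .Select(g => g.Products.OrderBy(p => p.Name).FirstOrDefault())
    .Where(p => p != null) // discard groups without any products
    .ToList();
```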

The generated SQL is using the window function ROW_NUMBER for grouping.

SELECT t0.*
FROM ProductGroups AS p
LEFT JOIN
(
    SELECT *
    FROM
    (
        SELECT *, ROW_NUMBER() OVER(PARTITION BY p0.GroupId ORDER BY p0.Name) AS row
        FROM Products AS p0
    ) AS t
    WHERE t.row <= 1
) AS t0 ON p.Id = t0.GroupId

 

In the end we can say that the support of GroupBy has been improved in version 3.0. Queries that previously were evaluated on the client are now translated to SQL and executed in the database. The only drawback is that we sometimes need to rewrite our LINQ queries.

 

 

 


Entity Framework Core: Getting more Deadlock Information with named Transactions

Whether in an application in production or in integration tests on CI, deadlocks are virtually unavoidable. The behavior of a deadlock depends on the database (MS SQL Server, MySQL, etc.) and the isolation level (like Snapshot Isolation). Some databases are blocking and some are not. For example, a deadlock in Microsoft SQL Server is blocking and means that there are at least 2 transactions waiting for each other to complete. In the end, one of them is going to "win" and the other is considered a "deadlock victim" and will be aborted by the database. The deadlock victim is easy to spot: it is the one that throws an exception. But what about the winner? What is the other transaction the deadlock victim was competing with?

One option is to capture a deadlock graph by using Extended Events, SQL Profiler or, in case the database is in Azure, by querying the master database. A deadlock graph itself is represented as XML and can be visualized by tools like the profiler.
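For a self-hosted SQL Server, one way to get the XML of recent deadlocks without setting anything up is to query the always-on system_health Extended Events session; a sketch (the ring buffer keeps only the most recent events, and the column aliases are mine):

```sql
-- read deadlock reports from the default system_health session
SELECT XEvent.query('(data/value/deadlock)[1]') AS DeadlockGraph
FROM
(
    SELECT CAST(target_data AS xml) AS TargetData
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS st ON st.event_session_address = s.address
    WHERE s.name = 'system_health' AND st.target_name = 'ring_buffer'
) AS tgt
CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(XEvent);
```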

[Image: deadlock graph visualized in SQL Server Profiler]

In the picture above the deadlock victim is on the left and the transaction that won is on the right. Let's figure out what code is responsible for the transaction on the right side.

First, we can try to find the responsible Entity Framework Core (EF) query by looking at the SQL statement. Sometimes that is more than enough; sometimes it isn't, because the queries are not written by the developers but generated by EF. Another approach is to use named transactions, because the transaction names are part of the deadlock graph.

By default, there is no method overload of BeginTransaction that takes a name because this feature is not supported by all databases, so let's build an extension method for SQL Server first.

public static class DatabaseFacadeExtensions
{
    public static IDbContextTransaction BeginTransaction(this DatabaseFacade database,
                                                         string name)
    {
        database.OpenConnection();

        var connection = (SqlConnection)database.GetDbConnection();
        var transaction = connection.BeginTransaction(name);

        return database.UseTransaction(transaction);
    }
}

The usage of the extension method is no different than without the name:

using (var tx = myDbContext.Database.BeginTransaction("Product Update"))
{
    ...
}

Let's look at the XML of the deadlock graph after we provided the transaction name.

<deadlock-list>
  <deadlock victim="process1dba0047c28">
    <process-list>
      <process id="process1dba0057468" spid="55" transactionname="Product Update" ...>
        <executionStack>
        <inputbuf>
          (@p1 uniqueidentifier,@p0 nvarchar(4000),@p2 varbinary(8));
          UPDATE [Products] SET [Name] = @p0
          WHERE [Id] = @p1 AND [RowVersion] = @p2;
          SELECT [RowVersion]
          FROM [Products]
          WHERE @@ROWCOUNT = 1 AND [Id] = @p1;
...

Having the transaction name we know exactly what part of our code we have to look at.

Need a deadlock? The demo is on GitHub: NamedTransactionsDemo.cs


Entity Framework Core: Beware of Lifetime Extension of TransactionScopes using C# 8.0

In the blog post Entity Framework Core: Use TransactionScope with Caution! we saw a few cases we have to be aware of when using TransactionScopes. Here is another one that can lead to an exception when using the new C# 8.0 feature: using declarations.

First, we look at a code snippet without the C# 8.0 feature:

using (var scope = new TransactionScope())
{
    scope.Complete();
}

myDbContext.Products.ToList();

Which doesn't behave the same if we blindly apply the new C# 8.0 feature:

using var scope = new TransactionScope();

scope.Complete();

// System.InvalidOperationException: The current TransactionScope is already complete.
myDbContext.Products.ToList();

because it is not enough to Complete the TransactionScope to be able to use the underlying database connection for further queries; the TransactionScope must be disposed first. In the 2nd code snippet the ToList() is executed after Complete but before Dispose.

The corresponding code without using declaration looks like as following:

using (var scope = new TransactionScope())
{
    scope.Complete();

    // System.InvalidOperationException: The current TransactionScope is already complete.
    myDbContext.Products.ToList();
}

This may be unexpected because the TransactionScope behaves differently than a regular IDbContextTransaction started with myDbContext.Database.BeginTransaction(). So this one does not lead to an error:

using var tx = myDbContext.Database.BeginTransaction();

tx.Commit();

myDbContext.Products.ToList();
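By the way, if we still want the terse using declaration for a TransactionScope, an explicit block gives us back the original behavior, because the scope is disposed at the end of the block; a minimal sketch:

```csharp
{
    using var scope = new TransactionScope();
    scope.Complete();
} // the TransactionScope is disposed here

// no exception, the scope has already been disposed
myDbContext.Products.ToList();
```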

 


Entity Framework Core: Include Filters

The Entity Framework Core (EF) extension method Include provides us the ability to load additional data besides the entities we are querying for. For example: loading products along with their translations.

var products = Context.Products
    .Include(p => p.Translations)
    .ToList();

--------------------------------------------------------------

public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public List<ProductTranslation> Translations { get; set; }
}

public class ProductTranslation
{
    public Guid ProductId { get; set; }
    public string Locale { get; set; }
    public string Description { get; set; }
}

In some use cases we want to load all translations for the requested products and in some cases we don't. There are plenty of times when we need the translations for a specific locale (i.e. language) only, but at the moment there is no built-in support for providing filters for the entities being included. You may go to the corresponding GitHub issue #1833 and vote for this feature to increase its priority.

Some of our options are:

Entity Framework Plus

Not much to say here. Install the NuGet package and use the extension method IncludeFilter.

var products = Context.Products
    .IncludeFilter(p => p.Translations.Where(t => t.Locale == "en"))
    .ToList();

Please note that the generated queries will be different than when using the built-in features of EF! Being aware of that is helpful, especially during performance optimization, like finding appropriate database indexes.

Global Query Filters

The global query filters were introduced in EF 2.0 and are primarily meant for the realization of features like soft delete and multi-tenancy. Although the filters are coupled to the entity type(s) and not to specific queries, we may still enable and disable them at any time. Furthermore, despite the name, the filters don't have to be "global", i.e. we can enable them for a specific instance of a DbContext only and not for all of them.

First, we introduce a variable that will be our filter and configure the corresponding query filter.

public class DemoDbContext : DbContext
{
    private string _localeFilter;

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        ...

        modelBuilder.Entity<ProductTranslation>()
                    .HasQueryFilter(t => _localeFilter == null
                                         || t.Locale == _localeFilter);
    }

For setting the filter we could make the variable _localeFilter public but the API would not be that nice and robust, that's why ...

The method for enabling/setting the filter comes next.

    // we are still in DemoDbContext
    public IDisposable SetTranslationFilter(string locale)
    {
        if (locale == null)
            throw new ArgumentNullException(nameof(locale));
        if (_localeFilter != null)
            throw new InvalidOperationException("Changing a filter is not allowed.");

        _localeFilter = locale;

        return new FilterReset(this);
    }

SetTranslationFilter returns an IDisposable so we don't forget to reset the filter. The FilterReset is a private struct nested inside DemoDbContext.

    private readonly struct FilterReset : IDisposable
    {
        private readonly DemoDbContext _ctx;

        public FilterReset(DemoDbContext ctx)
        {
            _ctx = ctx ?? throw new ArgumentNullException(nameof(ctx));
        }

        public void Dispose()
        {
            _ctx._localeFilter = null;
        }
    }

We are done with the implementation of the filter. The usage looks as follows:

using (Context.SetTranslationFilter(locale))
{
    var products = Context.Products
        .Include(p => p.Translations)
        .ToList();
}

When using global query filters in that manner, it is impossible for the filter to be applied to other queries by accident, because (1) the filter is bound to a specific instance of a DbContext and (2) a DbContext is not thread-safe, so it must not be used by multiple threads concurrently.

 

 


ASP.NET Core 3.0: Custom JsonConverter for the new System.Text.Json

With the introduction of ASP.NET Core 3.0 the default JSON serializer has been changed from Newtonsoft.Json to System.Text.Json. For projects and libraries switching to the new JSON serializer this change means more performance and the opportunity to rewrite our JsonConverters.

Serialization of concrete classes

Let's start with a simple one that can (de)serialize a concrete class Category. In our example we (de)serialize the property Name only.

public class Category
{
    public string Name { get; }

    public Category(string name)
    {
        Name = name;
    }
}

To implement a custom JSON converter we have to derive from the generic class JsonConverter<T> and to implement 2 methods: Read and Write.

public class CategoryJsonConverter : JsonConverter<Category>
{
    public override Category Read(ref Utf8JsonReader reader,
                                  Type typeToConvert,
                                  JsonSerializerOptions options)
    {
        var name = reader.GetString();

        return new Category(name);
    }

    public override void Write(Utf8JsonWriter writer,
                               Category value,
                               JsonSerializerOptions options)
    {
        writer.WriteStringValue(value.Name);
    }
}

The method Read uses the Utf8JsonReader to fetch a string, i.e. the name, and the method Write writes a string using an instance of Utf8JsonWriter.

In both cases (i.e. during serialization and deserialization) the converter is not called if the value is null, so I skipped the null checks. The .NET team doesn't do null checks either, see JsonKeyValuePairConverter<TKey, TValue>.

Let's test the new JSON converter. For that we create an instance of JsonSerializerOptions and add our CategoryJsonConverter to the Converters collection. Next, we use the static class JsonSerializer to serialize and to deserialize an instance of Category.

Category category = new Category("my category");

var serializerOptions = new JsonSerializerOptions
{
    Converters = { new CategoryJsonConverter() }
};

// json = "my category"
var json = JsonSerializer.Serialize(category, serializerOptions);

// deserializedCategory.Name = "my category"
var deserializedCategory = JsonSerializer.Deserialize<Category>(json, serializerOptions);

Serialization of generic classes

The next example is slightly more complex. The property we are serializing is a generic type argument, i.e. we can't use methods like reader.GetString() or writer.WriteStringValue(name) because we don't know the type at compile time.

In this example I've changed the class Category to a generic type and renamed the property Name to Key:

public class Category<T>
{
    public T Key { get; }

    public Category(T key)
    {
        Key = key;
    }
}

For serialization of the generic property Key we need to fetch a JsonConverter<T> using the instance of JsonSerializerOptions.

public class CategoryJsonConverter<T> : JsonConverter<Category<T>>
{
    public override Category<T> Read(ref Utf8JsonReader reader,
                                     Type typeToConvert,
                                     JsonSerializerOptions options)
    {
        var converter = GetKeyConverter(options);
        var key = converter.Read(ref reader, typeToConvert, options);

        return new Category<T>(key);
    }

    public override void Write(Utf8JsonWriter writer,
                               Category<T> value,
                               JsonSerializerOptions options)
    {
        var converter = GetKeyConverter(options);
        converter.Write(writer, value.Key, options);
    }

    private static JsonConverter<T> GetKeyConverter(JsonSerializerOptions options)
    {
        var converter = options.GetConverter(typeof(T)) as JsonConverter<T>;

        if (converter is null)
            throw new JsonException("...");

        return converter;
    }
}

The behavior of the generic JSON converter is the same as before especially if the Key is of type string.

Deciding the concrete JSON converter at runtime

Having several categories with different key types, say, string and int, we need to register them all with the JsonSerializerOptions.

var serializerOptions = new JsonSerializerOptions
{
    Converters =
    {
        new CategoryJsonConverter<string>(),
        new CategoryJsonConverter<int>()
    }
};

If the number of required CategoryJsonConverters grows too big or the concrete types of the Key are not known at compile time, then this approach is not an option. To make this decision at runtime we need to implement a JsonConverterFactory. The factory has 2 methods: CanConvert(type), which returns true if the factory is responsible for the serialization of the provided type, and CreateConverter(type, options), which returns an instance of type JsonConverter.

public class CategoryJsonConverterFactory : JsonConverterFactory
{
    public override bool CanConvert(Type typeToConvert)
    {
        if (!typeToConvert.IsGenericType)
            return false;

        var type = typeToConvert;

        if (!type.IsGenericTypeDefinition)
            type = type.GetGenericTypeDefinition();

        return type == typeof(Category<>);
    }

    public override JsonConverter CreateConverter(Type typeToConvert,
                                                  JsonSerializerOptions options)
    {
        var keyType = typeToConvert.GenericTypeArguments[0];
        var converterType = typeof(CategoryJsonConverter<>).MakeGenericType(keyType);

        return (JsonConverter)Activator.CreateInstance(converterType);
    }
}

Now, we can remove all registrations of the CategoryJsonConverter<T> from the options and add the newly implemented factory.

Category<int> category = new Category<int>(42);

var serializerOptions = new JsonSerializerOptions
{
    Converters = { new CategoryJsonConverterFactory() }
};

// json = 42
var json = JsonSerializer.Serialize(category, serializerOptions);

// deserialized.Key = 42
var deserialized = JsonSerializer.Deserialize<Category<int>>(json, serializerOptions);

 

In the end, the implementation of a custom converter for System.Text.Json is very similar to the one for Newtonsoft.Json. The biggest difference is the non-existence of a non-generic JsonConverter, but for that we've got the JsonConverterFactory.

Actually, there is a non-generic JsonConverter, which is the base class of JsonConverter<T> and JsonConverterFactory, but we cannot (and should not) use this class directly because its constructor is internal.

 


C# 8: Nullable Reference Types in .NET Standard < 2.1 and .NET 4.x

By switching the LangVersion to 8.0 and setting Nullable to enable in the csproj file, we now benefit from the nullable reference types introduced with the latest version of C#. With this new feature enabled, all type members and input/output parameters are considered to be non-null. If a member or parameter, like string value, can be null, then we need to put a ? at the end of the type: string? value.
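For reference, the two settings mentioned above go into the csproj file like this:

```xml
<PropertyGroup>
  <LangVersion>8.0</LangVersion>
  <Nullable>enable</Nullable>
</PropertyGroup>
```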

For most use cases putting a ? is enough, but sometimes we need more control over when a parameter is null and when it isn't. For that we can use the new attributes from the namespace System.Diagnostics.CodeAnalysis, such as NotNullWhen and NotNullIfNotNull.

Here is an example to make the benefits of one of the attributes more clear:

[return: NotNullIfNotNull("value")]
public static string? Reverse(this string? value)
{
    if (value == null)
        return null;

    return new String(Enumerable.Reverse(value).ToArray());
}

The attribute NotNullIfNotNull states that if the parameter value is not null then the return value will not be null either. Thanks to the annotation we get a corresponding warning when trying to dereference a possibly-null reference.

string value1 = null;
var firstChar1 = value1.Reverse()[0]; // Warn: Dereference of a possibly null reference.

string value2 = "123";
var firstChar2 = value2.Reverse()[0]; // No warning
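The other attribute mentioned above, NotNullWhen, couples the nullability of a parameter to the boolean return value; a small sketch following the well-known Try-pattern (the method TryGetDescription is made up for illustration):

```csharp
public static bool TryGetDescription(int id, [NotNullWhen(true)] out string? description)
{
    description = id == 42 ? "the answer" : null;
    return description != null;
}

// inside the if-branch the compiler knows that description is not null
if (TryGetDescription(42, out var description))
    Console.WriteLine(description.Length); // no warning
```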

The problem is, these attributes are only available in projects targeting .NET Standard 2.1 or .NET Core 3.0. So, having a class library targeting multiple frameworks like .NET Standard 2.0, .NET Standard 2.1 and .NET 4.8, we are kind of limited to the ? only. Or so it seems.

The solution looks the same as with JetBrains.Annotations: we copy the code of the required attributes into our project and make them internal:

#if !NETSTANDARD2_1

namespace System.Diagnostics.CodeAnalysis
{
    [AttributeUsage(AttributeTargets.Property | AttributeTargets.Parameter | AttributeTargets.ReturnValue, AllowMultiple = true)]
    internal sealed class NotNullIfNotNullAttribute : Attribute
    {
        public string ParameterName { get; }

        public NotNullIfNotNullAttribute(string parameterName)
        {
            ParameterName = parameterName;
        }
    }
}
#endif

The null-checks are now available in projects targeting the "older" platforms as well.


Entity Framework Core: Making RowNumber (more) useful

In the blog post ROW_NUMBER Support we saw how to implement a custom function and in the previous post Improved Value Conversion Support we realized that inserting and selecting custom types is one thing but using them for filtering is something totally different.

Let's take a query from one of the previous posts and add a WHERE clause:

var query = dbContext.OrderItems
    .Select(i => new
    {
        ...,
        RowNumber = EF.Functions.RowNumber(i.ProductId)
    })
    .Where(i => i.RowNumber == 1);

When executing the query we get a SqlException because the SQL statement is not valid.

SELECT
    ...,
    ROW_NUMBER() OVER(ORDER BY i.ProductId) AS RowNumber
FROM OrderItems AS i
WHERE ROW_NUMBER() OVER(ORDER BY i.ProductId) = CAST(1 AS bigint)

The ROW_NUMBER is not just in SELECT but in WHERE as well because EF cannot know that the main query should be put into a sub query before accessing RowNumber, i.e. something like:

SELECT ...
FROM
(
    SELECT
        ...,
        ROW_NUMBER() OVER(ORDER BY i.ProductId) AS RowNumber
    FROM OrderItems AS i
) t
WHERE t.RowNumber = CAST(1 AS bigint)

Probably, the easiest way is to introduce a method that gives EF a hint that the previous query should be a sub query. Something like:

var query = dbContext.OrderItems
    .Select(i => new
    {
        ...,
        RowNumber = EF.Functions.RowNumber(i.ProductId)
    })
    .AsSubQuery()
    .Where(i => i.RowNumber == 1);

Fortunately, we don't have to do much because internally the method AsQueryable (or rather the expression associated with it) does just that. We will just (shamelessly ab)use it:

public static class MyQueryableExtensions
{
    private static readonly MethodInfo _asQueryableMethodInfo
        = typeof(Queryable)
            .GetMethods(BindingFlags.Public | BindingFlags.Static)
            .Single(m => m.Name == nameof(Queryable.AsQueryable)
                         && m.IsGenericMethod);

    public static IQueryable<TEntity> AsSubQuery<TEntity>(this IQueryable<TEntity> source)
    {
        if (source == null)
            throw new ArgumentNullException(nameof(source));

        if (!(source.Provider is EntityQueryProvider))
            return source;

        var methodCall = Expression.Call(
            null,
            _asQueryableMethodInfo.MakeGenericMethod(typeof(TEntity)),
            source.Expression);

        return source.Provider.CreateQuery<TEntity>(methodCall);
    }
}

Having the method AsSubQuery we get the expected results.

 

 

 


Entity Framework Core: Improved Value Conversion Support

Entity Framework Core (EF) 2.1 introduced a new feature called Value Conversion. Now, we are able to map custom .NET types to a type the database understands and vice versa. This long-awaited feature is especially popular among software engineers following the domain driven design (DDD) patterns.

At the moment the value conversion has some limitations that will be removed in the future. The limitation I want to address in this post is the fallback to client-side evaluation in some fairly simple but rather frequently needed use cases.

As an example I will use an entity containing the property of type System.Drawing.Color:

public class Product
{
    ...
    public Color Color { get; set; }
}

And the corresponding value conversion:

modelBuilder.Entity<Product>()
    .Property(p => p.Color)
    .HasConversion(color => color.Name, name => Color.FromName(name));

With the provided conversion, only the name of the color is saved, and when the name comes back from the database it is mapped back to an instance of Color.

Having the converter we are now able to insert, update and select the property Color.

DemoDbContext ctx = ...;

// adding a new product
ctx.Products.Add(new Product { ..., Color = Color.Green });
ctx.SaveChanges();

// retrieving all products
var products = await ctx.Products.ToListAsync();

In both cases we get the expected results. Now let us fetch all products with the color Green.

var products = await ctx.Products
    .Where(p => p.Color == Color.Green)
    .ToListAsync();

This time the result is correct as well but EF emits a warning: The LINQ expression 'where ([p].Color == __Green_0)' could not be translated and will be evaluated locally.

A look at the SQL statement proves that the filtering is done in memory and not in the database:

SELECT ..., [p].[Color]
FROM [Products] AS [p]

When having just a few products in the database then there is nothing to worry about but with a lot of records this query may lead to major performance issues.

Implement the IRelationalTypeMappingSourcePlugin

The provided value converter is applied to the property Color only, so the constant value on the right side of the equation is not converted. What we need is a kind of global value converter, so let's build one.

To provide EF with the value converter for Color we have to implement the interface IRelationalTypeMappingSourcePlugin, which has just one method: FindMapping. First, we prepare the mapping that will be returned by the plugin. The type mapping derives from RelationalTypeMapping and provides the CLR type and the converter to the base class.

Actually, the store type nvarchar(50) is not being used in this case so you can even write rumpelstilzchen if you want to.

public class ColorTypeMapping : RelationalTypeMapping
{
    private static readonly ValueConverter<Color, string> _converter
        = new ValueConverter<Color, string>(color => color.Name,
                                            name => Color.FromName(name));

    public ColorTypeMapping()
        : base(new RelationalTypeMappingParameters(
                   new CoreTypeMappingParameters(typeof(Color), _converter),
                   "nvarchar(50)"))
    {
    }
}

The plugin itself just waits for the method to be called with the CLR type Color to return the previously implemented ColorTypeMapping.

public class ColorTypeMappingPlugin : IRelationalTypeMappingSourcePlugin
{
    public RelationalTypeMapping FindMapping(in RelationalTypeMappingInfo mappingInfo)
    {
        if (mappingInfo.ClrType == typeof(Color))
            return new ColorTypeMapping();

        return null;
    }
}

Register the plugin with dependency injection

Now, we have to register the ColorTypeMappingPlugin with the (internal) dependency injection (DI) of EF. For that we have to implement IDbContextOptionsExtension like I did in one of my previous blog posts (EF Core: Custom Functions - search for the first occurrence of IDbContextOptionsExtension) or use the NuGet package Thinktecture.EntityFrameworkCore.Relational (docs).

When using the NuGet package, the registration of the plugin looks as follows:

services
    .AddDbContext<DemoDbContext>(builder => builder
        ...
        .AddRelationalTypeMappingSourcePlugin<ColorTypeMappingPlugin>()

Trying out the plugin

Let's look at the SQL statements when we query for products with the color Green:

// having the value `Color.Green` inline we get the SQL
//    SELECT ..., [p].[Color]
//    FROM [Products] AS [p]
//    WHERE [p].[Color] = @__Green_0

var products = await ctx.Products
    .Where(p => p.Color == Color.Green)
    .ToListAsync();

// but more often the filter comes via a parameter
//    SELECT ..., [p].[Color]
//    FROM [Products] AS [p]
//    WHERE [p].[Color] = @__color_0

var color = Color.Green;
var products = await ctx.Products
    .Where(p => p.Color == color)
    .ToListAsync();

With less than 20 lines of code we were able to improve the usability of the value conversion, so the EF model can be as close as possible to our domain model without losing performance. It is just a matter of time until the EF team removes the limitation for these rather simple use cases. Still, there will be some edge cases in the future that will require some tweaking once again.


Entity Framework Core: Custom Functions (using HasDbFunction)

In the previous post (Custom Functions - using IMethodCallTranslator) we looked at the more flexible approach that requires some boilerplate code. For the usage of a custom function in a project, especially if there are no plans to make the function public, the sacrifice of some (not required) flexibility to reduce the amount of code is fully acceptable.

  1. Entity Framework Core: ROW_NUMBER Support
  2. Entity Framework Core: Custom Functions (using IMethodCallTranslator)
  3. Entity Framework Core: Custom Functions (using HasDbFunction)

As in the previous post we will use the extension method RowNumber with the ORDER BY part only as an example. The actual code can be found on Azure DevOps: Thinktecture.EntityFrameworkCore

Create a static method RowNumber

For this approach the method RowNumber must be a static method containing the ORDER BY parameters only, i.e. the method cannot be an extension method for DbFunctions like before.

public static class DbFunctionsExtensions
{
    // will throw at runtime because EF tries to translate DbFunctions as well
    public static long RowNumber(this DbFunctions _, object orderBy)
    {
        throw new InvalidOperationException("...");
    }

    // works as expected
    public static long RowNumber(object orderBy)
    {
        throw new InvalidOperationException("This method is for use with Entity Framework Core only and has no in-memory implementation.");
    }
}

Due to the new method signature of RowNumber, the usage is slightly different as well.

...
.Select(i => new
{
    ...,
    RowNumber = DbFunctionsExtensions.RowNumber(i.ProductId)
})

Introduction of custom function to EF

In the previous approach we had to implement IMethodCallTranslator, IMethodCallTranslatorPlugin and IDbContextOptionsExtension to introduce a new function to EF during the configuration of the (SqlServer)DbContextOptionsBuilder. This time we will skip the IMethodCallTranslator and use the ModelBuilder during OnModelCreating.

The configuration of a new function is encapsulated in an extension method AddRowNumberSupport.

public class DemoDbContext : DbContext
{
    ...

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        ...

        modelBuilder.AddRowNumberSupport();
    }
}

I was not totally honest with you at the beginning. It is true that we don't need a custom implementation of IMethodCallTranslator (SqlServerRowNumberTranslator), but we need some code from it. I will pretend that the SqlServerRowNumberTranslator does not exist and copy the required code into the extension method AddRowNumberSupport.

Like before we fetch a MethodInfo of the method RowNumber first.

public static class ModelBuilderExtensions
{
    private static readonly MethodInfo _rowNumberMethod
        = typeof(DbFunctionsExtensions)
            .GetMethod(nameof(DbFunctionsExtensions.RowNumber),
                       new[] { typeof(object) });

Then we use HasDbFunction and HasTranslation to introduce the function to EF. Inside of HasTranslation we (re)use the RowNumberExpression from the previous post.

    public static ModelBuilder AddRowNumberSupport(this ModelBuilder modelBuilder)
    {
        modelBuilder.HasDbFunction(_rowNumberMethod)
                    .HasTranslation(expressions =>
                    {
                        var orderByParams = ExtractParams(expressions.First());

                        return new RowNumberExpression(orderByParams);
                    });

        return modelBuilder;
    }

The method ExtractParams is the same as in the SqlServerRowNumberTranslator.

    private static ReadOnlyCollection<Expression> ExtractParams(Expression parameter)
    {
        if (parameter is ConstantExpression constant
            && constant.Value is IEnumerable<Expression> enumerable)
        {
            return enumerable.ToList().AsReadOnly();
        }

        return new List<Expression> { parameter }.AsReadOnly();
    }

Summary

This approach is easier to grasp, and the difference in the amount of code is not big enough to base a decision on that alone. Use the approach you like best, because the change from one approach to the other is easy and requires very little time.