Entity Framework Core: Isolation of Integration Tests

When working with Entity Framework Core (EF Core), a lot of code can be tested using the In-Memory database provider, but sometimes you want (or have) to go to a real database. For example, you are using not just LINQ but custom SQL statements for performance reasons, or you want to check that the database throws a specific exception under certain conditions, like a primary key violation.
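A primary key violation is a good example: the In-Memory provider doesn't enforce key constraints, so such a test only makes sense against a real database. A sketch of such a test (hypothetical; it assumes the DemoRepository and the CreateDbContext() helper used in the tests below, plus xUnit and FluentAssertions):

```csharp
[Fact]
public void Should_throw_on_primary_key_violation()
{
    var productId = Guid.NewGuid();

    // use two separate DbContext instances so the second insert is not
    // rejected by EF's change tracker but by the database itself
    new DemoRepository(CreateDbContext()).AddProduct(productId);
    Action secondInsert = () => new DemoRepository(CreateDbContext()).AddProduct(productId);

    // EF Core wraps the underlying SqlException in a DbUpdateException
    secondInsert.Should().Throw<DbUpdateException>();
}
```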

The biggest challenge of integration tests is the isolation of one test from another. In this post we will look at 3 options for doing that.

The code for the 3rd option is on GitHub: PawelGerr/Presentation-EntityFrameworkCore

Remarks: in my demos I'm using the 3rd-party libraries FluentAssertions and xUnit.

Given is a DemoRepository with a method AddProduct that we want to test. (The code is kept oversimplified for clarity.)

public class DemoRepository
{
    ...

    public void AddProduct(Guid id)
    {
        _dbContext.Products.Add(new Product { Id = id });
        _dbContext.SaveChanges();
    }
}
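For completeness, the Product entity and the DemoDbContext the repository works with could look like this (a minimal sketch based on the snippets in this post; the real classes may contain more members):

```csharp
public class Product
{
    public Guid Id { get; set; }
}

public class DemoDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    public DemoDbContext(DbContextOptions<DemoDbContext> options)
        : base(options)
    {
    }
}
```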

Using Transaction Scopes

EF Core added support for TransactionScope in version 2.1.

Isolating tests via TransactionScope is very simple: just wrap the call to AddProduct in a TransactionScope to revert all changes at the end of the test. But there are a few preconditions: the method under test must not start transactions using BeginTransaction(), or it has to use a TransactionScope as well.

Also, I recommend reading my other blog post: Entity Framework Core: Use TransactionScope with Caution!

public DemoRepositoryTests()
{
    _dbContext = CreateDbContext();
    _repository = new DemoRepository(_dbContext);
}

[Fact]
public void Should_add_new_product()
{
    var productId = new Guid("DBD9439E-6FFD-4719-93C7-3F7FA64D2220");

    using (var scope = new TransactionScope())
    {
        _repository.AddProduct(productId);

        _dbContext.Products.FirstOrDefault(p => p.Id == productId).Should().NotBeNull();

        // the transaction is going to be rolled back because the scope is not completed
        // scope.Complete();
    }
}

Using new Databases

Creating a new database for each test is very easy, but the tests are very time-consuming: on my machine each test takes about 10 seconds to create and delete a database on the fly.

The steps of each test are: generate a new database name, create the database by running EF migrations and delete the database in the end.

public class DemoRepositoryTests : IDisposable
{
    private readonly DemoDbContext _dbContext;
    private readonly DemoRepository _repository;
    private readonly string _databaseName;

    public DemoRepositoryTests()
    {
        _databaseName = Guid.NewGuid().ToString();

        var options = new DbContextOptionsBuilder<DemoDbContext>()
            .UseSqlServer($"Server=(local);Database={_databaseName};...")
            .Options;

        _dbContext = new DemoDbContext(options);
        _dbContext.Database.Migrate();

        _repository = new DemoRepository(_dbContext);
    }

    // Tests come here

    public void Dispose()
    {
        // the cast to string prevents the FormattableString overload
        // from turning the database name into a SQL parameter
        _dbContext.Database.ExecuteSqlCommand((string)$"DROP DATABASE [{_databaseName}]");
    }
}

Using different Database Schemas

The 3rd option is to use the same database but different schemas. The creation of a new schema and running the EF migrations usually takes less than 50 ms, which is totally acceptable for an integration test. The prerequisites for running queries with different schemas are schema-aware instances of DbContext and schema-aware EF migrations. Read my blog posts Entity Framework Core: Changing Database Schema at Runtime and Entity Framework Core: Changing DB Migration Schema at Runtime for more information about how to change the database schema at runtime.

The class executing the integration tests consists of 2 parts: the creation of the tables in the constructor and their deletion in Dispose().

I'm using a generic base class to use the same logic for different types of DbContext.

In the constructor we generate the name of the schema using Guid.NewGuid(), create DbContextOptions using the DbSchemaAwareMigrationAssembly and DbSchemaAwareModelCacheKeyFactory described in my previous posts, create the DbContext and run the EF migrations. The database is now fully prepared for executing tests. After the execution of the tests the EF migrations are rolled back using IMigrator.Migrate("0"), the EF history table __EFMigrationsHistory is deleted and the newly generated schema is dropped.

public abstract class IntegrationTestsBase<T> : IDisposable
    where T : DbContext
{
    private readonly string _schema;
    private readonly string _historyTableName;
    private readonly DbContextOptions<T> _options;

    protected T DbContext { get; }

    protected IntegrationTestsBase()
    {
        _schema = Guid.NewGuid().ToString("N");
        _historyTableName = "__EFMigrationsHistory";

        _options = CreateOptions();
        DbContext = CreateContext();
        DbContext.Database.Migrate();
    }

    protected abstract T CreateContext(DbContextOptions<T> options,
                                       IDbContextSchema schema);

    protected T CreateContext()
    {
        return CreateContext(_options, new DbContextSchema(_schema));
    }

    private DbContextOptions<T> CreateOptions()
    {
        return new DbContextOptionsBuilder<T>()
            .UseSqlServer("Server=(local);Database=Demo;...",
                          builder => builder.MigrationsHistoryTable(_historyTableName, _schema))
            .ReplaceService<IMigrationsAssembly, DbSchemaAwareMigrationAssembly>()
            .ReplaceService<IModelCacheKeyFactory, DbSchemaAwareModelCacheKeyFactory>()
            .Options;
    }

    public void Dispose()
    {
        DbContext.GetService<IMigrator>().Migrate("0");
        DbContext.Database.ExecuteSqlCommand(
            (string)$"DROP TABLE [{_schema}].[{_historyTableName}]");
        DbContext.Database.ExecuteSqlCommand((string)$"DROP SCHEMA [{_schema}]");

        DbContext?.Dispose();
    }
}

The usage of the base class looks as follows:

public class DemoRepositoryTests : IntegrationTestsBase<DemoDbContext>
{
    private readonly DemoRepository _repository;

    public DemoRepositoryTests()
    {
        _repository = new DemoRepository(DbContext);
    }

    protected override DemoDbContext CreateContext(DbContextOptions<DemoDbContext> options,
                                                   IDbContextSchema schema)
    {
        return new DemoDbContext(options, schema);
    }

    [Fact]
    public void Should_add_new_product()
    {
        var productId = new Guid("DBD9439E-6FFD-4719-93C7-3F7FA64D2220");

        _repository.AddProduct(productId);

        DbContext.Products.FirstOrDefault(p => p.Id == productId).Should().NotBeNull();
    }
}

 

Happy testing! 


Entity Framework Core: Changing DB Migration Schema at Runtime

In the first part of this short blog post series we looked at how to change the database schema of a DbContext, now it is all about changing the schema of the EF Core Migrations at runtime.

The samples are on Github: PawelGerr/Presentation-EntityFrameworkCore

Given is a DemoDbContext implementing our interface IDbContextSchema from the first part of this series.

public interface IDbContextSchema
{
    string Schema { get; }
}

public class DemoDbContext : DbContext, IDbContextSchema
{
    public string Schema { get; }

    public DbSet<Product> Products { get; set; }

    ...
}

At first we create a migration the usual way: dotnet ef migrations add Initial_Migration

And we get the following:

public partial class Initial_Migration : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.CreateTable("Products",
                                     table => new { Id = table.Column<Guid>() },
                                     constraints: table => table.PrimaryKey("PK_Products", x => x.Id));
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropTable("Products");
    }
}

Next, we add a constructor to provide the migration with IDbContextSchema and pass the schema to CreateTable and DropTable.

public partial class Initial_Migration : Migration
{
    private readonly IDbContextSchema _schema;

    public Initial_Migration(IDbContextSchema schema)
    {
        _schema = schema ?? throw new ArgumentNullException(nameof(schema));
    }

    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.CreateTable("Products",
                                     table => new { Id = table.Column<Guid>() },
                                     constraints: table => table.PrimaryKey("PK_Products", x => x.Id),
                                     schema: _schema.Schema);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropTable("Products", _schema.Schema);
    }
}

If we try to run the migration, we get a MissingMethodException: No parameterless constructor defined for this object, because EF Core needs a parameterless constructor to be able to create an instance of the migration. Luckily, we can adjust the part that is responsible for the creation of new instances. For that we derive from MigrationsAssembly and override the method CreateMigration. In CreateMigration we check whether the migration requires an instance of IDbContextSchema and whether the current DbContext implements this interface. If so, we create a new instance of the migration ourselves and return it to the caller; otherwise we pass the call on to the default implementation.

public class DbSchemaAwareMigrationAssembly : MigrationsAssembly
{
    private readonly DbContext _context;

    public DbSchemaAwareMigrationAssembly(ICurrentDbContext currentContext,
                                          IDbContextOptions options,
                                          IMigrationsIdGenerator idGenerator,
                                          IDiagnosticsLogger<DbLoggerCategory.Migrations> logger)
        : base(currentContext, options, idGenerator, logger)
    {
        _context = currentContext.Context;
    }

    public override Migration CreateMigration(TypeInfo migrationClass, string activeProvider)
    {
        if (activeProvider == null)
            throw new ArgumentNullException(nameof(activeProvider));

        var hasCtorWithSchema = migrationClass.GetConstructor(new[] { typeof(IDbContextSchema) }) != null;

        if (hasCtorWithSchema && _context is IDbContextSchema schema)
        {
            var instance = (Migration)Activator.CreateInstance(migrationClass.AsType(), schema);
            instance.ActiveProvider = activeProvider;
            return instance;
        }

        return base.CreateMigration(migrationClass, activeProvider);
    }
}

The last step is to register the DbSchemaAwareMigrationAssembly with the dependency injection of EF Core.

Remarks: to change the schema (or the table name) of the migration history table, you have to use the method MigrationsHistoryTable.

var optionsBuilder = new DbContextOptionsBuilder<DemoDbContext>()
    .UseSqlServer("..."
                  // optional
                  //, b => b.MigrationsHistoryTable("__EFMigrationsHistory", schema)
                 )
    .ReplaceService<IModelCacheKeyFactory, DbSchemaAwareModelCacheKeyFactory>()
    .ReplaceService<IMigrationsAssembly, DbSchemaAwareMigrationAssembly>();

 

That's all!  


Entity Framework Core: Use TransactionScope with Caution!

One of the new features of Entity Framework Core 2.1 is the support of TransactionScopes. The usage of a TransactionScope is very easy: just create a new instance in a using block, write the code inside the block and, when you are finished, call Complete() to commit the transaction:

using (var scope = new TransactionScope())
{
    var groups = MyDbContext.ProductGroups.ToList();

    scope.Complete();
}

But before changing your code from BeginTransaction() to TransactionScope, you should be aware of some issues they can cause.

The demos are on GitHub: github.com/PawelGerr/Presentation-EntityFrameworkCore

In all examples we will select ProductGroups from a DemoDbContext.

public class DemoDbContext : DbContext
{
    public DbSet<ProductGroup> ProductGroups { get; set; }

    public DemoDbContext(DbContextOptions<DemoDbContext> options)
        : base(options)
    {
    }
}

public class ProductGroup
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

Async methods

EF has an asynchronous counterpart for (almost?) every synchronous operation. So it is nothing special (it is even recommended) to use async-await for I/O operations.

In the first example we are using await inside a TransactionScope.

using (var scope = new TransactionScope())
{
    var groups = await Context.ProductGroups.ToListAsync().ConfigureAwait(false);
}

Looks harmless but it throws a System.InvalidOperationException: A TransactionScope must be disposed on the same thread that it was created.

The reason is that the TransactionScope doesn't flow from one thread to another by default. To fix that we have to use TransactionScopeAsyncFlowOption.Enabled:

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    var groups = await Context.ProductGroups.ToListAsync().ConfigureAwait(false);
}

Does it work now? It depends.

If the calls with and without TransactionScopeAsyncFlowOption are using the same database connection and the call without the option is executed first, then we get another exception: System.InvalidOperationException: Connection currently has transaction enlisted. Finish current transaction and retry.

In other words, the first call is the culprit but the second one breaks:

try
{
    using (var scope = new TransactionScope())
    {
        // We know this one - System.InvalidOperationException:
        // A TransactionScope must be disposed on the same thread that it was created.
        var groups = await Context.ProductGroups.ToListAsync().ConfigureAwait(false);
    }
}
catch (Exception e)
{
    // error handling
}

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // Implemented correctly but throws anyway - System.InvalidOperationException:
    // Connection currently has transaction enlisted. Finish current transaction and retry.
    var groups = await Context.ProductGroups.ToListAsync().ConfigureAwait(false);
}

Imagine the first call is done in a 3rd-party lib or a framework you are using, i.e. you don't know the code - you will be searching for the cause forever if you haven't seen this error before.

BeginTransaction within TransactionScope

Transaction scopes can be nested. For example, if the outer scope is rolled back, then the changes made in the inner scope are reverted as well. The following example works without problems:

using (var scope = new TransactionScope())
{
    // some code
    Do();
}

public void Do()
{
    using (var anotherScope = new TransactionScope())
    {
        var groups = Context.ProductGroups.ToList();
    }
}
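The rollback semantics mentioned above can be sketched as follows (assuming the DemoDbContext from this post); the inner scope votes to commit, but because the outer scope is disposed without Complete(), the insert is rolled back anyway:

```csharp
using (var outerScope = new TransactionScope())
{
    using (var innerScope = new TransactionScope())
    {
        Context.ProductGroups.Add(new ProductGroup { Id = Guid.NewGuid(), Name = "Fruits" });
        Context.SaveChanges();

        innerScope.Complete(); // the inner scope votes to commit ...
    }
} // ... but the outer scope is not completed, so the INSERT is reverted
```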

Let's try to change the inner scope to BeginTransaction().

using (var scope = new TransactionScope())
{
    // some code
    Do();
}

public void Do()
{
    using (var tx = Context.Database.BeginTransaction())
    {
        var groups = Context.ProductGroups.ToList();
    }
}

This use case is not supported, and we get a System.InvalidOperationException: An ambient transaction has been detected. The ambient transaction needs to be completed before beginning a transaction on this connection.

Yet again, if Do() is part of a 3rd-party lib or a framework, then this method has to be moved out of the outer TransactionScope.

Multiple instances of DbContext (or rather DB connections)

Depending on the project we could end up having multiple instances of DbContext. The instances could be of the same or different type and it may be that the other context doesn't even belong to your application but is being used by a framework you are using.

The use case is the following: we have a TransactionScope with 2 database accesses using different database connections.

using (var scope = new TransactionScope())
{
    var groups = Context.ProductGroups.ToList();
    var others = AnotherCtx.SomeEntities.ToList();
}

This use case is not supported either, because it requires a distributed transaction coordinator, which exists only on Windows, so the EF team has dropped the support altogether. The exception we get on both Windows and Linux is System.PlatformNotSupportedException: This platform does not support distributed transactions.
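One way to stay within a single local transaction - a sketch, assuming both contexts point to the same database and that the hypothetical AnotherDbContext can be constructed the same way as DemoDbContext - is to let both contexts share one DbConnection, so only a single connection enlists in the ambient transaction:

```csharp
using (var connection = new SqlConnection("Server=(local);Database=Demo;..."))
using (var scope = new TransactionScope())
{
    var options = new DbContextOptionsBuilder<DemoDbContext>()
        .UseSqlServer(connection) // the shared connection, not a connection string
        .Options;
    var otherOptions = new DbContextOptionsBuilder<AnotherDbContext>()
        .UseSqlServer(connection)
        .Options;

    using (var context = new DemoDbContext(options))
    using (var anotherContext = new AnotherDbContext(otherOptions))
    {
        var groups = context.ProductGroups.ToList();
        var others = anotherContext.SomeEntities.ToList();

        scope.Complete();
    }
}
```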

Conclusion 

The issues mentioned in this blog post are neither new nor specific to Entity Framework Core. I recommend putting some research into this matter before deciding to use or not to use transaction scopes.


Entity Framework Core: Changing Database Schema at Runtime

At the moment there is no built-in support for changing the database schema at runtime. Luckily, Entity Framework Core (EF) provides us with the right tools to implement it by ourselves.

The demos are on GitHub: github.com/PawelGerr/Presentation-EntityFrameworkCore

Given are a database context DemoDbContext and an entity Product.

public class DemoDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    public DemoDbContext(DbContextOptions<DemoDbContext> options)
        : base(options)
    {
    }
}

public class Product
{
    public Guid Id { get; set; }
}

There are 2 ways to change the schema, either by applying the TableAttribute or by implementing the interface IEntityTypeConfiguration<TEntity>.

The first option won't help us because the schema is hard-coded.

[Table("Products", Schema = "demo")]
public class Product
{
    public Guid Id { get; set; }
}

The second option gives us the ability to provide the schema from DbContext to the EF model configuration. At first we implement the entity configuration for Product.

public class ProductEntityConfiguration : IEntityTypeConfiguration<Product>
{
    private readonly string _schema;

    public ProductEntityConfiguration(string schema)
    {
        _schema = schema;
    }

    public void Configure(EntityTypeBuilder<Product> builder)
    {
        if (!String.IsNullOrWhiteSpace(_schema))
            builder.ToTable(nameof(DemoDbContext.Products), _schema);

        builder.HasKey(product => product.Id);
    }
}

Now we use the entity configuration in OnModelCreating and pass the schema to it via the constructor. Additionally, we create the interface IDbContextSchema containing just the schema (i.e. a string) to be able to inject it into DemoDbContext.

public interface IDbContextSchema
{
    string Schema { get; }
}

// DbContext implements IDbContextSchema as well
// so we know it is "schema-aware"
public class DemoDbContext : DbContext, IDbContextSchema
{
    public string Schema { get; }

    public DbSet<Product> Products { get; set; }

    public DemoDbContext(DbContextOptions<DemoDbContext> options,
                         IDbContextSchema schema = null)
        : base(options)
    {
        Schema = schema?.Schema;
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        modelBuilder.ApplyConfiguration(new ProductEntityConfiguration(Schema));
    }
}

We are almost done. The last task is to change how EF caches database model definitions. By default, just the type of the DbContext is used as the cache key, but we need to differentiate the models not just by type but by schema as well. For that we implement the interface IModelCacheKeyFactory.

public class DbSchemaAwareModelCacheKeyFactory : IModelCacheKeyFactory
{
    public object Create(DbContext context)
    {
        return new
        {
            Type = context.GetType(),
            Schema = context is IDbContextSchema schema ? schema.Schema : null
        };
    }
}

Now we have to replace the default implementation with ours and register the IDbContextSchema. In the current example the IDbContextSchema is just a singleton, but it can be provided in any way we want: read from a database, derived from a JWT bearer token during an HTTP request, etc.

IServiceCollection services = ...;

services
    .AddDbContext<DemoDbContext>(
        builder => builder.UseSqlServer("...")
                          .ReplaceService<IModelCacheKeyFactory, DbSchemaAwareModelCacheKeyFactory>())
    .AddSingleton<IDbContextSchema>(new DbContextSchema("demo"));

// just a helper class
public class DbContextSchema : IDbContextSchema
{
    public string Schema { get; }

    public DbContextSchema(string schema)
    {
        Schema = schema ?? throw new ArgumentNullException(nameof(schema));
    }
}
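With this setup in place, two contexts created with different schemas read from different tables; a small usage sketch (the options variable is assumed to be built with the replaced IModelCacheKeyFactory, as shown above):

```csharp
// same options, different schemas - each context gets its own cached model
var demoContext = new DemoDbContext(options, new DbContextSchema("demo"));
var testContext = new DemoDbContext(options, new DbContextSchema("test"));

var demoProducts = demoContext.Products.ToList(); // SELECT ... FROM [demo].[Products]
var testProducts = testContext.Products.ToList(); // SELECT ... FROM [test].[Products]
```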

 

Voila! 

 

PS: There is one special use case for this feature - the isolation of integration tests, due to the missing support of ambient transactions. For that we need schema-aware migrations, which we will look at in the next blog post.

Stay tuned!

 


Entity Framework Core: Inheritance - Table-per-Type (TPT) is not supported, is it? (Part 2 - Database First)

In the previous post we created 2 Entity Framework Core (EF Core) models with a code first approach. One model was using the Table-per-Hierarchy (TPH) pattern and the other one Table-per-Type (TPT). In this post we want to approach a more common scenario we see in customer projects: this time we are using the database first approach.

All demos are on Github.

Business data model

The business data model is the same as in the previous post. We have 3 DTOs: Person, Customer and Employee.

public class PersonDto
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class CustomerDto : PersonDto
{
    public DateTime DateOfBirth { get; set; }
}

public class EmployeeDto : PersonDto
{
    public decimal Turnover { get; set; }
}

Table-per-Hierarchy (TPH)

We start with the Table-per-Hierarchy pattern. Given is a table People containing all columns from all DTOs plus 1 column Discriminator to be able to distinguish the customers from the employees.

Remark: we are using nvarchar(max) for the sake of simplicity.

TABLE People
(
    Id uniqueidentifier NOT NULL PRIMARY KEY,
    FirstName nvarchar(max) NULL,
    LastName nvarchar(max) NULL,
    DateOfBirth datetime2(7) NULL,
    Turnover decimal(18, 2) NULL,
    Discriminator nvarchar(max) NOT NULL
)

With the following command we let EF Core scaffold the entities (or rather the entity) and the database context:

dotnet ef dbcontext scaffold "Server=(local);Database=TphDemo;Trusted_Connection=True" Microsoft.EntityFrameworkCore.SqlServer -f -c ScaffoldedTphDbContext --context-dir ./TphModel/DatabaseFirst -o ./TphModel/DatabaseFirst -p ./../../EntityFramework.Demo.csproj -s ./../../EntityFramework.Demo.csproj

The result is not the one we might have expected but is pretty reasonable. The scaffolding creates just 1 entity People with all fields in it because there is no way for EF Core to guess that the table contains 3 entities and not just 1.

public class People
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime? DateOfBirth { get; set; }
    public decimal? Turnover { get; set; }
    public string Discriminator { get; set; }
}

First, let's fix the name of the entity, because it should be Person, not People.

For that we create a class that does the pluralization/singularization and register it with the so-called IDesignTimeServices. The implementation of IDesignTimeServices doesn't need any kind of registration; EF Core will find it automatically. The actual pluralization/singularization is done by the 3rd-party library Inflector.

public class Pluralizer : IPluralizer
{
    public string Pluralize(string identifier)
    {
        // Inflector needs some help with "People" otherwise we get "Peoples"
        if (identifier == "People")
            return identifier;

        return Inflector.Inflector.Pluralize(identifier);
    }

    public string Singularize(string identifier)
    {
        return Inflector.Inflector.Singularize(identifier);
    }
}

public class DesignTimeServices : IDesignTimeServices
{
    public void ConfigureDesignTimeServices(IServiceCollection services)
    {
        services.AddSingleton<IPluralizer, Pluralizer>();
    }
}

Now the generated entity gets the name Person - but to make the model right we have to split the class into 3 classes manually. After these manual adjustments we have 2 options: switch to the code first approach, or adjust the classes manually after every scaffolding to apply the changes from the database. The adjusted code is virtually identical to the one of the code first approach, but this time the Discriminator is defined explicitly.

Remark: I've renamed Person to PersonTph so the names are the same as in the previous blog post.

public class PersonTph
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Discriminator { get; set; }
}

public class CustomerTph : PersonTph
{
    public DateTime DateOfBirth { get; set; }
}

public class EmployeeTph : PersonTph
{
    public decimal Turnover { get; set; }
}

The generated database context needs some adjustments as well, because the DbSets for customers and employees are missing and the field Discriminator has to be configured as the discriminator.

public partial class ScaffoldedTphDbContext : DbContext
{
    public virtual DbSet<Person> People { get; set; }

    public ScaffoldedTphDbContext(DbContextOptions<ScaffoldedTphDbContext> options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Person>(entity =>
        {
            entity.Property(e => e.Id).ValueGeneratedNever();
            entity.Property(e => e.Discriminator).IsRequired();
        });
    }
}

As with the entities, the only change - compared to the code first approach - is the explicit definition of the Discriminator.

public class ScaffoldedTphDbContext : DbContext
{
    public virtual DbSet<PersonTph> People { get; set; }
    public virtual DbSet<CustomerTph> Customers { get; set; }
    public virtual DbSet<EmployeeTph> Employees { get; set; }

    public ScaffoldedTphDbContext(DbContextOptions<ScaffoldedTphDbContext> options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<PersonTph>(entity => entity.Property(e => e.Id).ValueGeneratedNever());

        modelBuilder.Entity<PersonTph>()
                    .HasDiscriminator(person => person.Discriminator)
                    .HasValue<PersonTph>(nameof(PersonTph))
                    .HasValue<CustomerTph>(nameof(CustomerTph))
                    .HasValue<EmployeeTph>(nameof(EmployeeTph));
    }
}

Table-per-Type (TPT)

Having a database using the TPT pattern, we start off with 3 tables:

TABLE People
(
    Id uniqueidentifier NOT NULL PRIMARY KEY,
    FirstName nvarchar(max) NULL,
    LastName nvarchar(max) NULL
)
TABLE Customers
(
    Id uniqueidentifier NOT NULL
        PRIMARY KEY
        FOREIGN KEY REFERENCES People (Id),
    DateOfBirth datetime2(7) NOT NULL
)
TABLE Employees
(
    Id uniqueidentifier NOT NULL
        PRIMARY KEY
        FOREIGN KEY REFERENCES People (Id),
    Turnover decimal(18, 2) NOT NULL
)

With the following command we create the entities and the database context:

dotnet ef dbcontext scaffold "Server=(local);Database=TptDemo;Trusted_Connection=True" Microsoft.EntityFrameworkCore.SqlServer -f -c ScaffoldedTptDbContext --context-dir ./TptModel/DatabaseFirst -o ./TptModel/DatabaseFirst -p ./../../EntityFramework.Demo.csproj -s ./../../EntityFramework.Demo.csproj

The scaffolder generates 3 entities that are almost correct. The only flaw is the name of the navigation property IdNavigation pointing to the base class Person.

public partial class Person
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public Customer Customer { get; set; }
    public Employee Employee { get; set; }
}
public partial class Employee
{
    public Guid Id { get; set; }
    public decimal Turnover { get; set; }
    public Person IdNavigation { get; set; }
}
public partial class Customer
{
    public Guid Id { get; set; }
    public DateTime DateOfBirth { get; set; }
    public Person IdNavigation { get; set; }
}

Luckily, this issue is very easy to fix by implementing ICandidateNamingService and registering it with IDesignTimeServices.

public class CustomCandidateNamingService : CandidateNamingService
{
    public override string GetDependentEndCandidateNavigationPropertyName(IForeignKey foreignKey)
    {
        if (foreignKey.PrincipalKey.IsPrimaryKey())
            return foreignKey.PrincipalEntityType.ShortName();

        return base.GetDependentEndCandidateNavigationPropertyName(foreignKey);
    }
}

public class DesignTimeServices : IDesignTimeServices
{
    public void ConfigureDesignTimeServices(IServiceCollection services)
    {
        services.AddSingleton<IPluralizer, Pluralizer>()
            .AddSingleton<ICandidateNamingService, CustomCandidateNamingService>();
    }
}

After re-running the scaffolder, we get the expected results:

public class Customer
{
    public Guid Id { get; set; }
    public DateTime DateOfBirth { get; set; }

    public Person Person { get; set; }
}

public partial class Employee
{
    public Guid Id { get; set; }
    public decimal Turnover { get; set; }

    public Person Person { get; set; }
}

The last part is the database context. Fortunately, we don't have to change anything.

public partial class ScaffoldedTptDbContext : DbContext
{
    public virtual DbSet<Customer> Customers { get; set; }
    public virtual DbSet<Employee> Employees { get; set; }
    public virtual DbSet<Person> People { get; set; }

    public ScaffoldedTptDbContext(DbContextOptions<ScaffoldedTptDbContext> options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>(entity =>
        {
            entity.Property(e => e.Id).ValueGeneratedNever();

            entity.HasOne(d => d.Person)
                  .WithOne(p => p.Customer)
                  .HasForeignKey<Customer>(d => d.Id);
        });

        modelBuilder.Entity<Employee>(entity =>
        {
            entity.Property(e => e.Id).ValueGeneratedNever();

            entity.HasOne(d => d.Person)
                  .WithOne(p => p.Employee)
                  .HasForeignKey<Employee>(d => d.Id);
        });

        modelBuilder.Entity<Person>(entity =>
        {
            entity.Property(e => e.Id).ValueGeneratedNever();
        });
    }
}

With TPT we can, but don't have to, switch to the code first approach, because we can regenerate the entities and the database context at any time.

Conclusion

The database first approach works best with TPT; with TPH not so much, because a relational database knows nothing about inheritance. With TPT there is just one minor issue, but thanks to the great job of the Entity Framework team we can adjust the code generation as we want, without the need to copy all the code of Entity Framework Core.


Entity Framework Core: Inheritance - Table-per-Type (TPT) is not supported, is it? (Part 1 - Code First)

With O/R mappers there are a few patterns how a class hierarchy can be mapped to a relational database. The most popular ones are the Table-Per-Hierarchy (TPH) and the Table-Per-Type (TPT) patterns. The Entity Framework Core 2.x (EF Core) officially supports the Table-per-Hierarchy pattern only. The support of Table-per-Type is in the backlog of the Entity Framework team, i.e. it is not (officially) supported yet. Nevertheless, you can use TPT with the current version of EF Core. The usability is not ideal but acceptable. Especially, if you have an existing database using TPT then this short blog post series may give you an idea how to migrate to EF Core.

In the 1st part we will set up 2 EF Core models incl. database migrations for TPH and TPT using code first approach. In the 2nd part we are going to use the database first approach.

Remarks: this blog post is not about what approach is the best for your solution :)

All demos are on Github.

Business data model

In both cases we are going to use the following business data model. For our outward-facing interface, we are using DTOs. We have a PersonDto with 3 fields and 2 derived classes CustomerDto and EmployeeDto, both having 1 additional field.

public class PersonDto
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class CustomerDto : PersonDto
{
    public DateTime DateOfBirth { get; set; }
}

public class EmployeeDto : PersonDto
{
    public decimal Turnover { get; set; }
}

Table-Per-Hierarchy (TPH)

Now, let's look at the solution to have internal entities based on TPH. At first, we need to define the entity classes. Thanks to the native support of TPH and the very simple data model the entities are identical to the DTOs.

public class PersonTph
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class CustomerTph : PersonTph
{
    public DateTime DateOfBirth { get; set; }
}

public class EmployeeTph : PersonTph
{
    public decimal Turnover { get; set; }
}

We can implement the database context to be able to access customers and employees like this:

public class TphDbContext : DbContext
{
    public DbSet<PersonTph> People { get; set; }
    public DbSet<CustomerTph> Customers { get; set; }
    public DbSet<EmployeeTph> Employees { get; set; }

    public TphDbContext(DbContextOptions<TphDbContext> options)
        : base(options)
    {
    }
}

And for the sake of completion we will be using Entity Framework Core Migrations to create and update the database schema. For that we execute the following command:

dotnet ef migrations add Initial_TPH_Migration -p ./../../EntityFramework.Demo.csproj -s ./../../EntityFramework.Demo.csproj -c TphDbContext -o ./TphModel/CodeFirst/Migrations

As expected we get 1 table with all fields from person, customer and employee plus 1 additional column Discriminator, so EF Core is able to differentiate customers from employees.

public partial class Initial_TPH_Migration : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.CreateTable("People",
            table => new
            {
                Id = table.Column<Guid>(nullable: false),
                FirstName = table.Column<string>(nullable: true),
                LastName = table.Column<string>(nullable: true),
                DateOfBirth = table.Column<DateTime>(nullable: true),
                Turnover = table.Column<decimal>(nullable: true),
                Discriminator = table.Column<string>(nullable: false)
            },
            constraints: table => table.PrimaryKey("PK_People", x => x.Id));
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropTable("People");
    }
}

The usage of TPH is nothing special: we just use the appropriate property on the TphDbContext.

TphDbContext ctx = ...

// Create a customer
ctx.Customers.Add(new CustomerTph()
{
    Id = Guid.NewGuid(),
    FirstName = "John",
    LastName = "Foo",
    DateOfBirth = new DateTime(1980, 1, 1)
});

// Fetch all customers
var customers = ctx.Customers
    .Select(c => new CustomerDto()
    {
        Id = c.Id,
        FirstName = c.FirstName,
        LastName = c.LastName,
        DateOfBirth = c.DateOfBirth
    })
    .ToList();
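For completeness, polymorphic queries over the base set work as well: EF Core translates OfType<T>() into a filter on the Discriminator column. A small sketch based on the context above (not taken from the demo repo):

```csharp
TphDbContext ctx = ...

// Fetch all people, i.e. customers and employees alike
var allPeople = ctx.People.ToList();

// Fetch employees only; EF Core adds a WHERE on the Discriminator column
var employees = ctx.People.OfType<EmployeeTph>().ToList();
```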

Table-Per-Type (TPT) 

Ok, that was easy. Now, what can a solution for TPT look like? In the absence of native support for TPT the entities do not derive from, but reference, each other. The field Id of customer and employee is both the primary key and a foreign key pointing to person. The structure of the entities is very similar to the database schema of the TPT pattern.

public class PersonTpt
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class CustomerTpt
{
    [ForeignKey(nameof(Person))]
    public Guid Id { get; set; } // PK and FK pointing to PersonTpt
    public PersonTpt Person { get; set; }

    public DateTime DateOfBirth { get; set; }
}

public class EmployeeTpt
{
    [ForeignKey(nameof(Person))]
    public Guid Id { get; set; } // PK and FK pointing to PersonTpt
    public PersonTpt Person { get; set; }

    public decimal Turnover { get; set; }
}

The database context for TPT is structurally identical to the one for TPH.

public class TptDbContext : DbContext
{
    public DbSet<PersonTpt> People { get; set; }
    public DbSet<CustomerTpt> Customers { get; set; }
    public DbSet<EmployeeTpt> Employees { get; set; }

    public TptDbContext(DbContextOptions<TptDbContext> options)
        : base(options)
    {
    }
}
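As a side note, the PK-as-FK mapping could also be configured via the Fluent API instead of the [ForeignKey] attribute. A sketch, not taken from the demo repo, using OnModelCreating:

```csharp
public class TptDbContext : DbContext
{
    // DbSets and constructor as above
    ...

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // CustomerTpt.Id is the PK and at the same time the FK to PersonTpt
        modelBuilder.Entity<CustomerTpt>()
                    .HasOne(c => c.Person)
                    .WithOne()
                    .HasForeignKey<CustomerTpt>(c => c.Id);

        modelBuilder.Entity<EmployeeTpt>()
                    .HasOne(e => e.Person)
                    .WithOne()
                    .HasForeignKey<EmployeeTpt>(e => e.Id);
    }
}
```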

Next, we will create an EF Core migration with the following command:

dotnet ef migrations add Initial_TPT_Migration -p ./../../EntityFramework.Demo.csproj -s ./../../EntityFramework.Demo.csproj -c TptDbContext -o ./TptModel/CodeFirst/Migrations

The migration creates 3 tables with correct columns, primary keys and foreign keys.

public partial class Initial_TPT_Migration : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.CreateTable("People",
            table => new
            {
                Id = table.Column<Guid>(nullable: false),
                FirstName = table.Column<string>(nullable: true),
                LastName = table.Column<string>(nullable: true)
            },
            constraints: table => table.PrimaryKey("PK_People", x => x.Id));

        migrationBuilder.CreateTable("Customers",
            table => new
            {
                Id = table.Column<Guid>(nullable: false),
                DateOfBirth = table.Column<DateTime>(nullable: false)
            },
            constraints: table =>
            {
                table.PrimaryKey("PK_Customers", x => x.Id);
                table.ForeignKey("FK_Customers_People_Id",
                                 x => x.Id,
                                 "People",
                                 "Id",
                                 onDelete: ReferentialAction.Cascade);
            });

        migrationBuilder.CreateTable("Employees",
            table => new
            {
                Id = table.Column<Guid>(nullable: false),
                Turnover = table.Column<decimal>(nullable: false)
            },
            constraints: table =>
            {
                table.PrimaryKey("PK_Employees", x => x.Id);
                table.ForeignKey("FK_Employees_People_Id",
                                 x => x.Id,
                                 "People",
                                 "Id",
                                 onDelete: ReferentialAction.Cascade);
            });
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropTable("Customers");
        migrationBuilder.DropTable("Employees");
        migrationBuilder.DropTable("People");
    }
}

The biggest difference, compared to TPH, is in the usage of the entities. To get to the fields of the person (i.e. the base type) we have to use the navigation property Person. This may seem cumbersome at first, but it is not a hindrance in practice.

TptDbContext ctx = ...

// Fetch all customers
var customers = ctx.Customers
    .Select(c => new CustomerDto()
    {
        Id = c.Id,
        FirstName = c.Person.FirstName,
        LastName = c.Person.LastName,
        DateOfBirth = c.DateOfBirth
    })
    .ToList();

// Create a customer
ctx.Customers.Add(new CustomerTpt()
{
    Person = new PersonTpt()
    {
        Id = Guid.NewGuid(),
        FirstName = "John",
        LastName = "Foo"
    },
    DateOfBirth = new DateTime(1980, 1, 1)
});

Voilà!

Conclusion

With Entity Framework Core we can use both the Table-Per-Hierarchy and the Table-Per-Type pattern, at least with the code first approach. Whether and how the patterns are applicable using the database first approach we will see in the next blog post.

Stay tuned.


Entity Framework Core 2.1 Performance: Beware of N+1 Queries (Revisited)

In the previous post we identified some Entity Framework (EF) LINQ queries that are affected by the so-called N+1 queries problem. In the meantime a new version (2.1-RC1) of Entity Framework has been released, so we check the SQL statement generation yet another time.

Samples: GitHub repo

Positive thing(s) first...

In the previous version the selection of a filtered collection was affected by the problem, with and without ToList(). Not anymore:

var groups = Context.ProductGroups
.Where(g => g.Name.Contains("Group"))
.Select(g => new
{
ProductGroup = g,
Products = g.Products.Where(p => p.Name.Contains("1")).ToList()
})
.ToList();

Adding ToList() leads to 2 SQL statements instead of N+1, where N is the number of selected product groups.

1 query for fetching of the product groups:

SELECT
    [g].[Id], [g].[Name]
FROM
    [ProductGroups] AS [g]
WHERE
    CHARINDEX(N'Group', [g].[Name]) > 0

And 1 query for fetching of the products:

SELECT
    [g.Products].[Id], [g.Products].[GroupId], [g.Products].[Name], [t].[Id]
FROM
    [Products] AS [g.Products]
    INNER JOIN
    (
        SELECT
            [g0].[Id]
        FROM
            [ProductGroups] AS [g0]
        WHERE
            CHARINDEX(N'Group', [g0].[Name]) > 0
    ) AS [t]
        ON [g.Products].[GroupId] = [t].[Id]
WHERE
    CHARINDEX(N'1', [g.Products].[Name]) > 0
ORDER BY
    [t].[Id]

Alas, the usage of FirstOrDefault() still produces N+1 queries:

var groups = Context.ProductGroups
.Where(g => g.Name.Contains("Group"))
.Select(g => new
{
ProductGroup = g,
Product = g.Products.FirstOrDefault()
})
.ToList();

And at the moment GroupBy() is not as powerful as in EF 6, so the following query fetches the whole table instead of just the first product of each product group.

var firstProducts = Context.Products
.GroupBy(p => p.GroupId)
.Select(g => g.FirstOrDefault())
.ToList();

The corresponding SQL statement is:

SELECT
    [p].[Id], [p].[GroupId], [p].[Name]
FROM
    [Products] AS [p]
ORDER BY
    [p].[GroupId]
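Until GroupBy() is translated properly, one possible workaround is to make the client evaluation explicit via AsEnumerable(). The whole table is still transferred, but at least the intent is visible in the code and nobody is surprised by the generated SQL. A sketch, not a recommendation for big tables:

```csharp
var firstProducts = Context.Products
    .AsEnumerable()         // switch to LINQ-to-Objects deliberately
    .GroupBy(p => p.GroupId)
    .Select(g => g.First()) // first product per group, evaluated in .NET
    .ToList();
```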

 

There is a lot of work to do but we are getting there... Until then, keep using your favorite profiling tool.


Entity Framework Core Performance: Beware of N+1 Queries

After working with Entity Framework 6 (EF 6) for several years, a software developer can predict the SQL statements generated by EF just by looking at the LINQ queries. With Entity Framework Core (EF Core) the SQL statement generation has changed, in some cases for the better, in others for the worse.

In this blog post we will check a few LINQ queries and see which of them are executing N+1 SQL statements where N is the number of selected records.

Given is a DbContext with 2 entities Product and ProductGroup. (Repo with sample code: github.com/PawelGerr/Presentation-EntityFrameworkCore)

public class DemoDbContext : DbContext
{
public DbSet<Product> Products { get; set; }
public DbSet<ProductGroup> ProductGroups { get; set; }
}
public class Product
{
public Guid Id { get; set; }
public string Name { get; set; }

public Guid GroupId { get; set; }
public ProductGroup Group { get; set; }
}
public class ProductGroup
{
public Guid Id { get; set; }
public string Name { get; set; }

public ICollection<Product> Products { get; set; }
}

Let's print out all product groups having the word "Group" in their names with corresponding products via Include() first and using Select() second.

// Using Include()
var groups = Context.ProductGroups
.Include(g => g.Products)
.Where(g => g.Name.Contains("Group"))
.ToList();

Print(groups);
// Using Select()
var groups = Context.ProductGroups
.Where(g => g.Name.Contains("Group"))
.Select(g => new
{
ProductGroup = g,
g.Products
})
.ToList();

Print(groups);

In both cases 2 SQL statements are executed by EF Core: 1 for the product groups and 1 for the products. On the contrary, EF 6 executes just 1 statement. This may imply that the performance of EF 6 is better than that of EF Core, but in practice it is worse because the queries get huge and produce more load on the database.

-- Fetching product groups
SELECT [g].[Id], [g].[Name]
FROM [ProductGroups] AS [g]
WHERE CHARINDEX(N'Group', [g].[Name]) > 0
ORDER BY [g].[Id]
-- Fetching products
SELECT [g.Products].[Id], [g.Products].[GroupId], [g.Products].[Name]
FROM [Products] AS [g.Products]
INNER JOIN
(
    SELECT [g0].[Id]
    FROM [ProductGroups] AS [g0]
    WHERE CHARINDEX(N'Group', [g0].[Name]) > 0
) AS [t] ON [g.Products].[GroupId] = [t].[Id]
ORDER BY [t].[Id]

Now we don't take all products but only those with the term "1" in their names, and print them out twice(!).

var groups = Context.ProductGroups
.Where(g => g.Name.Contains("Group"))
.Select(g => new
{
ProductGroup = g,
Products = g.Products.Where(p => p.Name.Contains("1"))
})
.ToList();

Print(groups); // 1st iteration over product groups
Print(groups); // 2nd iteration over product groups

The result is disappointing. Having 5 product groups matching the condition, we get 11 SQL statement executions: 1 query for fetching the 5 product groups and 10 (= 2 * 5) for fetching the products. Let's put a ToList() at the end of the products query.

var groups = Context.ProductGroups
.Where(g => g.Name.Contains("Group"))
.Select(g => new
{
ProductGroup = g,
Products = g.Products.Where(p => p.Name.Contains("1")).ToList()
})
.ToList();

Now we have 6 (= 1 + 5) queries being sent to the database. It is getting better but still not satisfying.

-- 1 query for fetching product groups
SELECT [g].[Id], [g].[Name]
FROM [ProductGroups] AS [g]
WHERE CHARINDEX(N'Group', [g].[Name]) > 0
-- 5 queries for fetching products (i.e. 1 query per fetched product group)
SELECT [p].[Id], [p].[GroupId], [p].[Name]
FROM [Products] AS [p]
WHERE (CHARINDEX(N'1', [p].[Name]) > 0) AND
(@_outer_Id = [p].[GroupId])

Obviously, EF Core has some difficulties translating queries if Select() contains a filtered collection. Let's select just the first product.

var groups = Context.ProductGroups
.Where(g => g.Name.Contains("Group"))
.Select(g => new
{
ProductGroup = g,
Product = g.Products.FirstOrDefault()
})
.ToList();

Print(groups);

We are still getting 6 queries, meaning that the "problem" doesn't lie in the cardinality of the result type (Product vs ICollection<Product>) but in collections in general.

Solutions

We can reduce the number of queries by not using the navigation property Products but doing the "JOIN" ourselves, for example via GroupJoin.

var productsQuery = Context.Products.Where(i => i.Name.Contains("1"));

var groups = Context.ProductGroups
.Where(g => g.Name.Contains("Group"))
.GroupJoin(productsQuery, g => g.Id, p => p.GroupId, (g, p) => new
{
ProductGroup = g,
Products = p
})
.ToList();

Print(groups);

The previous LINQ query produces just 1 query.

SELECT [g].[Id], [g].[Name], [t].[Id], [t].[GroupId], [t].[Name]
FROM [ProductGroups] AS [g]
LEFT JOIN
(
    SELECT [i].[Id], [i].[GroupId], [i].[Name]
    FROM [Products] AS [i]
    WHERE CHARINDEX(N'1', [i].[Name]) > 0
) AS [t] ON [g].[Id] = [t].[GroupId]
WHERE CHARINDEX(N'Group', [g].[Name]) > 0
ORDER BY [g].[Id]

An alternative solution is to fetch the data separately and to do the lookups in .NET.

var groupsQuery = Context.ProductGroups
    .Where(g => g.Name.Contains("Group"));

var productsByGroupId = groupsQuery
    .SelectMany(g => g.Products.Where(i => i.Name.Contains("1")))
    .ToLookup(p => p.GroupId);

var groups = groupsQuery
    .Select(g => new
    {
        ProductGroup = g,
        Products = productsByGroupId[g.Id]
    })
    .ToList();

The generated SQL statements are easier to handle by the database but there are 2 of them. 

-- For product groups
SELECT [g].[Id], [g].[Name]
FROM [ProductGroups] AS [g]
WHERE CHARINDEX(N'Group', [g].[Name]) > 0
-- For products
SELECT [g.Products].[Id], [g.Products].[GroupId], [g.Products].[Name]
FROM [ProductGroups] AS [g]
INNER JOIN [Products] AS [g.Products]
    ON [g].[Id] = [g.Products].[GroupId]
WHERE (CHARINDEX(N'Group', [g].[Name]) > 0) AND
     (CHARINDEX(N'1', [g.Products].[Name]) > 0)

Depending on the database model, the amount of data, indexes, and the number of collections and columns being fetched, the one or the other solution may perform better.

Closing Words

The query generation of EF Core is not optimal yet, but the Entity Framework team is currently working on the "N+1 queries" problem, so we will re-check all queries with EF Core 2.1 very soon.

In general, whether it is EF 6, EF Core or another O/R mapper, it is recommended to use a database profiling tool so that we get a good understanding of the technology we are using.

 


(ASP).NET Core in production: Changing log level temporarily - 2nd approach

In the previous blog post I talked about how to change the log level at runtime by coupling the appsettings.json (or rather the IConfiguration) with the ILogger. However, the solution has one drawback: you need to change the file appsettings.json for that. In this post we will be able to change the log level without changing the configuration file.

Want to see some real code? Look at the examples on https://github.com/PawelGerr/Thinktecture.Logging.Configuration 

or just use Nuget packages: Thinktecture.Extensions.Logging.Configuration and Thinktecture.Extensions.Serilog.Configuration 

At first we need custom implementations of IConfigurationSource and IConfigurationProvider. The actual work is done by the implementation of IConfigurationProvider; the IConfigurationSource is just there to inject the provider into your ConfigurationBuilder.

var config = new ConfigurationBuilder()
    .Add(new LoggingConfigurationSource())
    .Build();
-----------------------------------------
public class LoggingConfigurationSource : IConfigurationSource
{
    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        // Our implementation of IConfigurationProvider
        return new LoggingConfigurationProvider();
    }
}

As we can see, the LoggingConfigurationSource doesn't do much, so let us focus on the LoggingConfigurationProvider, or rather on the interface IConfigurationProvider.

public interface IConfigurationProvider
{
    bool TryGet(string key, out string value);
    void Set(string key, string value);
    IChangeToken GetReloadToken();
    void Load();
    IEnumerable<string> GetChildKeys(IEnumerable<string> earlierKeys, string parentPath);
}

There are 2 methods that look promising: Set(key, value) for setting a value for a specific key and GetReloadToken() for notifying other components (like the logger) about changes in the configuration. Now that we know how to change the configuration values, we need to know the keys and values the logger uses to configure itself. Consult the Microsoft docs for Microsoft.Extensions.Logging.ILogger, or the docs of Serilog.Settings.Configuration in case you are using Serilog.

The key pattern for the MS logger is <<Provider>>:LogLevel:<<Category>>. Here are some examples for the logs coming from Thinktecture components: Console:LogLevel:Thinktecture or LogLevel:Thinktecture. The value is one of the Microsoft.Extensions.Logging.LogLevel values, like Debug.

namespace Thinktecture
{
    public class MyComponent
    {
        public MyComponent(ILogger<MyComponent> logger)
        {
            logger.LogDebug("Log from Thinktecture.Component");
        }
    }
}
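For Serilog (with Serilog.Settings.Configuration) the corresponding keys follow the pattern Serilog:MinimumLevel:Override:<<Category>>, i.e. the flattened key Serilog:MinimumLevel:Override:Thinktecture corresponds to the following JSON:

```json
{
    "Serilog": {
        "MinimumLevel": {
            "Default": "Information",
            "Override": {
                "Thinktecture": "Debug"
            }
        }
    }
}
```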

Let's look at the implementation, luckily there is a base class we can use.

public class LoggingConfigurationProvider : ConfigurationProvider
{
    public void SetLevel(LogLevel level, string category = null, string provider = null)
    {
        // returns something like "Console:LogLevel:Thinktecture"
        var path = BuildLogLevelPath(category, provider);
        var levelName = GetLevelName(level); // returns log level like "Debug"

        // Data and OnReload() are provided by the base class
        Data[path] = levelName;
        OnReload(); // notifies other components
    }

    ...
}

Actually, that's it ... You can change the configuration just by setting and deleting keys in the dictionary Data and calling OnReload() afterwards. The only part that's left is to get hold of the instance of the LoggingConfigurationProvider to be able to call the method SetLevel from the outside, but I'm pretty sure you don't need any help for that, especially having access to my GitHub repo :)
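For illustration, one way to get hold of the provider is to let the source cache the instance and register it with the DI container. A minimal sketch with a hypothetical Provider property (the actual implementation is in the repo):

```csharp
public class LoggingConfigurationSource : IConfigurationSource
{
    // hypothetical property caching the provider instance
    public LoggingConfigurationProvider Provider { get; } = new LoggingConfigurationProvider();

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return Provider;
    }
}

// during startup
var loggingSource = new LoggingConfigurationSource();

var config = new ConfigurationBuilder()
    .Add(loggingSource)
    .Build();

// make the provider injectable, e.g. into a controller or admin component
services.AddSingleton(loggingSource.Provider);

// later: provider.SetLevel(LogLevel.Debug, category: "Thinktecture");
```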

 

The provided solution does what we intended, but do we really want such simple filtering of the logs? Imagine you are using Entity Framework Core (EF) and there are multiple requests that modify some data. One request is able to commit its transaction, the other one isn't and throws, say, an OptimisticConcurrencyException. Your code catches the exception and handles it by retrying the whole transaction, with success. Entity Framework logs this error (i.e. the SQL statement, arguments etc.) internally. The question is: should this error be logged by EF as an Error even if it has been handled by our application? If yes, then our logs will be full of errors and it would seem as if we had a lot of bugs in our application. Perhaps it would be better to let EF log its internal errors as Debug, so that this information is not lost, and if our app can't handle the exception then we log the exception as an error ourselves.

But that's for another day ...


.NET Core in production: Changing log level temporarily

When running an application in production, the log level is usually set somewhere between Information and Error. The question is what to do if you or your customer experiences some undesired behavior and the logs with the present log level aren't enough to pinpoint the issue.

The first solution that comes to mind is to try to reproduce the issue on a developer's machine with a lower log level like Debug. That may be enough to localize the bug, but sometimes it isn't. Even if you are allowed to restart the app in production with a lower log level, the issue may go away ... temporarily, i.e. the app still has a bug.

A better solution is to change the log level temporarily without restarting the app.

The first step is to initialize the logger with the IConfiguration. That way the logger changes the level as soon as you change the corresponding configuration (file).

In this post I will provide 2 examples: one using the ILoggingBuilder of ASP.NET Core, the other using Serilog because it is not tightly coupled to ASP.NET Core (but works very well with it!).

Using ILoggingBuilder:

// the content of appsettings.json
{
    "Logging": {
        "LogLevel": { "Default": "Information" }
    }
}
-----------------------------------------
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", false, true) // reloadOnChange=true
    .Build();
// Setup of the ASP.NET Core application
WebHost
    .CreateDefaultBuilder()
    .ConfigureLogging(builder =>
    {
        builder.AddConfiguration(config); // <= init logger
        builder.AddConsole();
    })
    ...

Using Serilog:

// the content of appsettings.json
{
    "Serilog": {
        "MinimumLevel": { "Default": "Information" }
    }
}
-----------------------------------------
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", false, true) // reloadOnChange=true
    .Build();
var serilogConfig = new LoggerConfiguration()
    .ReadFrom.Configuration(config) // <= init logger
    .WriteTo.Console();

In case you are interested in the integration with (ASP).NET Core:

// If having a WebHost
WebHost
    .CreateDefaultBuilder()
    .ConfigureLogging(builder =>
    {
        builder.AddSerilog(serilogConfig.CreateLogger());
    })
    ...;

// If there is no WebHost
var loggerFactory = new LoggerFactory()
    .AddSerilog(serilogConfig.CreateLogger());

At this point the loggers are coupled to the IConfiguration, or rather to the appsettings.json, i.e. if you change the level to Debug the app starts emitting debug messages as well, without being restarted.

This solution has one downside: you need physical access to the appsettings.json. Even if you have it, it still would be better not to change the configuration file. What we want is a component that lets us set and reset a temporary log level; if this temporary level is not active then the values from appsettings.json should be used. That way you can change the level from a GUI or via an HTTP request against the Web API of your web application.
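To give an idea, such an HTTP endpoint could look like the following sketch. The interface ITemporaryLogLevelSwitch is purely hypothetical here, a stand-in for whatever component applies and resets the temporary level:

```csharp
// Hypothetical component that applies/resets a temporary log level override
public interface ITemporaryLogLevelSwitch
{
    void SetLevel(LogLevel level);
    void Reset(); // fall back to the levels from appsettings.json
}

[Route("api/logging")]
public class LogLevelController : Controller
{
    private readonly ITemporaryLogLevelSwitch _logLevelSwitch;

    public LogLevelController(ITemporaryLogLevelSwitch logLevelSwitch)
    {
        _logLevelSwitch = logLevelSwitch;
    }

    [HttpPut("level/{level}")]
    public IActionResult SetLevel(LogLevel level)
    {
        _logLevelSwitch.SetLevel(level);
        return Ok();
    }

    [HttpDelete("level")]
    public IActionResult ResetLevel()
    {
        _logLevelSwitch.Reset();
        return Ok();
    }
}
```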

Luckily, the implementation effort for that feature is pretty low, but that's for another day...