Using Dapper Output Params for Optimistic Concurrency

Ever had a problem with two users editing the same record? Maybe one of them overwrote the other’s changes.

The answer to this is optimistic concurrency, which is a fancy term for the practice where, before saving an entity, we check that no one else has updated the record since it was originally loaded.

As an aside, “pessimistic concurrency” is so-called because under this model, the records are locked when someone opens them for editing, and unlocked once the record is saved or the changes discarded. Optimistic concurrency only checks for changes at the point of saving.

In practical terms, this involves adding a column into your SQL database table; this column is updated each time the row is updated. You can do this manually, but SQL Server gives it to you for free using the rowversion data type.

CREATE TABLE Employee (
  Id int identity not null primary key,
  Name nvarchar(200) not null,
  StartDate datetime not null,
  EndDate datetime null,
  ConcurrencyToken rowversion not null
)

The ConcurrencyToken field is technically a byte array, but can be cast to a bigint if you want a readable representation. It’s simply a value that increments every time that the individual row changes.
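If you want that readable representation on the .NET side as well, you can convert the byte array yourself. Here's a minimal sketch (the extension method name is mine; note that SQL Server returns rowversion bytes in big-endian order, so they need reversing on little-endian machines before conversion):

```csharp
using System;
using System.Linq;

public static class ConcurrencyTokenExtensions
{
    // Turns an 8-byte rowversion value into the same number you'd get
    // from CAST(ConcurrencyToken AS bigint) in SQL Server.
    public static long ToReadableValue(this byte[] token)
    {
        var bytes = BitConverter.IsLittleEndian
            ? token.Reverse().ToArray()
            : token;
        return BitConverter.ToInt64(bytes, 0);
    }
}
```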

Let’s say we create a .NET object for this. It looks like this:

public class Employee {
  public int Id { get; set; }
  public string Name { get; set; }
  public DateTime StartDate { get; set; }
  public DateTime? EndDate { get; set; }
  public byte[] ConcurrencyToken { get; set; }
}

Using Dapper, we can write the following simple data access layer:

public class EmployeeDAL
{
  private readonly DbConnection _connection; // assume we've already got this

  public async Task<Employee> GetAsync(int id)
  {
    const string Sql = "SELECT * FROM Employee WHERE Id = @id";
    return await _connection.QuerySingleOrDefaultAsync<Employee>(Sql, new { id });
  }

  public async Task<Employee> InsertAsync(Employee employee)
  {
    const string Sql = @"
INSERT INTO Employee ( Name, StartDate, EndDate )
VALUES ( @Name, @StartDate, @EndDate )";

    await _connection.ExecuteAsync(Sql, employee);
    return employee;
  }

  public async Task<Employee> UpdateAsync(Employee employee)
  {
    const string Sql = @"
UPDATE Employee SET
  Name = @Name,
  StartDate = @StartDate,
  EndDate = @EndDate
WHERE Id = @Id";

    await _connection.ExecuteAsync(Sql, employee);
    return employee;
  }
}

That’s great, but if you ran this, you’d notice that, on inserting or updating the Employee, the Id and ConcurrencyToken fields don’t change. To do that, you’d have to load a new version of the object. Also, the concurrency field isn’t actually doing anything – what’s that about?

Let’s make some changes. In our UpdateAsync method, let’s do:

public async Task<Employee> UpdateAsync(Employee employee)
{
  const string Sql = @"
UPDATE Employee SET
  Name = @Name,
  StartDate = @StartDate,
  EndDate = @EndDate
WHERE Id = @Id
AND ConcurrencyToken = @ConcurrencyToken";

  var rowCount = await _connection.ExecuteAsync(Sql, employee);
  if (rowCount == 0)
    throw new Exception("Oh no, someone else edited this record!");

  return employee;
}

This is crude, but we now can’t save the employee if the ConcurrencyToken field that we have doesn’t match the one in SQL. Dapper is taking care of mapping our object fields into parameters for us, and we can use a convention in our SQL that our parameter names will match the object fields.

However, we still won’t update the concurrency token on save, and when inserting we still don’t know what the ID of the new employee is. Enter output parameters!

public async Task<Employee> InsertAsync(Employee employee)
{
  const string Sql = @"
INSERT INTO Employee ( Name, StartDate, EndDate )
VALUES ( @Name, @StartDate, @EndDate )

SELECT @Id = Id, @ConcurrencyToken = ConcurrencyToken
FROM Employee
WHERE Id = SCOPE_IDENTITY()";

  var @params = new DynamicParameters(employee)
    .Output(employee, e => e.ConcurrencyToken)
    .Output(employee, e => e.Id);

  await _connection.ExecuteAsync(Sql, @params);
  return employee;
}

Now after saving, the employee object will have updated Id and ConcurrencyToken values. We’ve used DynamicParameters instead of an object, which has allowed us to map explicit Output params to update the target object. How does it work? You give Dapper a target object, and a function to tell it which property to update. It then looks for a SQL output parameter or result that matches that property, and then uses that knowledge to update the property based on the results from SQL.
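If the expression-based Output helper feels too magical, the same result can be achieved by declaring the output parameters by hand and copying the values back yourself. A rough equivalent sketch (using Dapper's DynamicParameters.Add and Get<T>, inside the same InsertAsync method):

```csharp
var p = new DynamicParameters(employee);
p.Add("@Id", dbType: DbType.Int32, direction: ParameterDirection.Output);
p.Add("@ConcurrencyToken", dbType: DbType.Binary, size: 8, direction: ParameterDirection.Output);

await _connection.ExecuteAsync(Sql, p);

// Copy the output values back onto the entity ourselves.
employee.Id = p.Get<int>("@Id");
employee.ConcurrencyToken = p.Get<byte[]>("@ConcurrencyToken");
```

The Output overload is doing essentially this for you, plus remembering which property to write the value back to.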



Thinking about microservices? Start with your database

Hang on, the database? Ahem, I think you’ll find that service contexts should be vertical, not horizontal, that is, based on function, not technology. So, nyah-nyah.

Let me explain.

This post isn’t for you if you happen to be Netflix. Nor is it for you if you’re writing a greenfield application with microservices in mind. No, this post is for the team maintaining a chunk of EnterpriseWare, something relatively dull yet useful and (presumably) profitable. This team has been toying with the idea of microservices, and may well be drawing up elaborate Visio diagrams to define the new architecture.

So, what does this have to do with databases?

A database is a service. In Windows land, it literally is a Windows service (sqlservr.exe, in point of fact), but in more general terms, it provides a service for managing an application’s data.

A database with a relatively simple schema, where the application layer controls most of the SQL code and most operations are simple CRUD, is more like a RESTful API. A database with a more complicated schema (parent-subcategories, multiple tables involved in atomic transactions) that provides complex stored procedures for the application(s) to use is more like a traditional RPC service. Either way, these things are services with a well-known API that the application uses.

Behold, the monolith!

Your basic monolithic codebase will hopefully include the source code for both the database and the application.

If every release requires your DBA to manually run ad-hoc SQL scripts, then you have far bigger problems to address than microservices. Version your migrations and set up continuous deployment before you start looking to change architecture!

Typically, a new version of the product will involve both database changes, and application changes, for example: adding a new field to a business entity, which may touch the table schema, stored procedure(s), the application tier and the UI layer. Without the schema changes, the app will not function. Without the application changes, the schema changes are, at best, useless, and at worst, broken.

Therefore, deployments involve taking the application down, migrating the database (usually after taking a backup), and then deploying the application code.


Microservices imply an application that composes multiple, independent services into a cohesive whole, with the emphasis being on “independent”. For a team with no real experience of this, a useful place to start is the database. Not only will it teach you valuable lessons about managing multiple parts of your application separately, but, even if you don’t decide to go down the microservice rabbit-hole, you will still have gained value from the exercise.

So, what exactly are we talking about? In practical terms:

  • Create a new repository for your database source code.
  • Create a separate continuous integration/deployment pipeline for your database.
  • Deploy your database on its own schedule, independent of the application.
  • Ensure that your database is always backwards-compatible with all live versions of your application.

Now, this last part is the hardest to do, and there’s no silver bullet or magic technique that will do it for you, but it is crucial that you get it right.

Whenever you make a schema change, whether that be a change to a table or stored procedure or whatever, then that change must not break any existing code. This may mean:

  • Using default values for new columns
  • Using triggers or stored procedures to fake legacy columns that are still in use
  • Maintaining shadow, legacy copies of data in extreme circumstances
  • Creating _V2, _V3 etc. stored procedures where the parameters or behaviour changes

The exact techniques used will depend on the change, and must be considered each time the schema changes. After a while, this becomes a habit, and ideally more and more business logic moves into the application layer (whilst storage and consistency logic remains the purview of the database).

Let’s take the example of adding a new column. In this new world, we simply add a database migration to add the new column, and either ensure that it’s nullable, or that it has a sensible default value. We then deploy this change. The application can then take advantage of the column.

Let’s take stored procedures. If we’re adding new optional parameters, then we can just change the procedure, since this is a safe change. If, however, we are adding new required parameters, or removing existing ones, we would create MyProcedure_V2, and deploy it.

Let’s say we want to remove a column – how do we do this? The simple answer is that we don’t, until we’re sure that no code is relying on it. We instead mark it as obsolete wherever we can, and gradually trim off all references until we can safely delete it.

And this benefits us… how?

The biggest benefit to this approach, besides training for microservices, is that you should now be able to run multiple versions of your application concurrently, which in turn means you can start deploying more often to preview customers and get live feedback rather than waiting for a big-bang, make-or-break production deployment.

It also means that you’re doing less during your deployments, which in turn makes them safer. In any case, application-tier deployments tend to be easier, and are definitely easier to roll back.

By deploying database changes separately, the riskier side of change management is at least isolated, and, if all goes well, invisible to end users. By their very nature, safe, non-breaking changes are also usually faster to run, and may even require no downtime.

Apart from all that, though, your team are now performing the sort of change management that will be required for microservices, without breaking up the monolith, and without a huge amount of up-front investment. You’re learning how to manage versions, how to mark APIs as obsolete, how to keep services running to maintain uptime, and how to run a system that isn’t just a single “thing”.


If your team has up until now been writing and maintaining a monolith, you aren’t going to get to serverless/microservice/containerised nirvana overnight. It may not even be worth it.

Rather than investing in a large-scale new architecture, consider testing the water first. If your team can manage to split off your database layer from your main application, and manage its lifecycle according to a separate deployment strategy, handling all versioning and schema issues as they arise, then you may be in a good position to do that for more of your components.

On the other hand, maybe this proves too much of a challenge. Maybe you don’t have enough people to make it work, or your requirements and schema change too often (very, very common for internal line-of-business applications). There’s nothing wrong with that, and it doesn’t imply any sort of failure. It does mean that you will probably struggle to realise any benefit from services, micro or otherwise, and my advice would be to keep refactoring the monolith to be the best it can be.

ASP.NET Core automatic type registration

A little bit of syntactic sugar for you this Friday!

Let’s say we have an application that uses a command pattern to keep the controllers slim. Maybe we have a base command class that looks a bit like:

public abstract class CommandBase<TModel> where TModel : class
{
  protected CommandBase(MyDbContext db)
  {
    Db = db;
  }

  protected MyDbContext Db { get; }

  public abstract Task<CommandResult> ExecuteAsync(TModel model);
}

Using a pattern like this means that we can have very slim controller actions where the logic is moved into business objects:

public async Task<IActionResult> Post(
  [FromServices] MyCommand command,
  [FromBody] MyCommandModel model)
{
  if (!ModelState.IsValid)
    return BadRequest(ModelState);

  var result = await command.ExecuteAsync(model);
  return HandleResultSomehow(result);
}

We could slim this down further using a validation filter, but this is good enough for now. Note that we’re injecting our command via the action parameters, which makes our actions very easy to test if we want to.

The problem here is that, unless we register all of our command classes with DI, this won’t work, and you’ll see an `Unable to resolve service for type` error. Registering the types is easy, but it’s also easy to forget to do, and leads to a bloated startup class. Instead, we can ensure that any commands which are named appropriately are automatically added to our DI pipeline by writing an extension method:

public static void AddAllCommands(this IServiceCollection services)
{
  const string NamespacePrefix = "Example.App.Commands";
  const string NameSuffix = "Command";

  var commandTypes = typeof(Startup).Assembly
    .GetTypes()
    .Where(t =>
      t.IsClass &&
      !t.IsAbstract &&
      t.Namespace?.StartsWith(NamespacePrefix, StringComparison.OrdinalIgnoreCase) == true &&
      t.Name.EndsWith(NameSuffix, StringComparison.OrdinalIgnoreCase));

  foreach (var type in commandTypes)
  {
    services.AddTransient(type);
  }
}
Using this, we can use naming conventions to ensure that all of our command classes are automatically registered and made available to our controllers.
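In ConfigureServices, the wiring then becomes a one-liner (a sketch, assuming the extension method above is in scope):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Registers every class under Example.App.Commands whose name
    // ends in "Command" as a transient service.
    services.AddAllCommands();
}
```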

Logging traces in ASP.NET Core 2.0 with Application Insights

Application Insights (AI) is a great way to gather diagnostic information from your ASP.NET app, and with Core 2.0, it’s even easier to get started. One thing that the tutorials don’t seem to cover is how to see your trace logs in AI. By “tracing”, I mean things like:

_logger.LogInformation("Loaded entity {id} from DB", id);

This can be an absolute life-saver during production outages. In simpler times, we might have used DebugView and a TraceListener to view this information (in our older, on-prem apps, we used log4net to persist some of this information to disk for more permanent storage). In an Azure world, this isn’t available to us, but AI in theory gives us a one-stop-shop for all our telemetry.

Incidentally, it’s worth putting some care into designing your tracing strategy — try to establish conventions for message formats, what log levels to use when, and what additional data to record.

You can see the tracing data in the AI query view as one of the available tables:

The AI “traces” table

For our sample application, we’ve created a new website by using:

dotnet new webapi

We’ve then added the following code into ValuesController:

public class ValuesController : Controller
{
  private readonly ILogger _logger;

  public ValuesController(ILogger<ValuesController> logger)
  {
    _logger = logger;
  }

  [HttpGet]
  public IEnumerable<string> Get()
  {
    _logger.LogDebug("Loading values from {Scheme}://{Host}{Path}", Request.Scheme, Request.Host, Request.Path);
    return new string[] { "value1", "value2" };
  }
}

We have injected an instance of ILogger, and we’re writing some very basic debugging information to it (this is a contrived example as the framework already provides this level of logging).

Additionally, we’ve followed the steps to set our application up with Application Insights. This adds the Microsoft.ApplicationInsights.AspNetCore package, and the instrumentation key into our appsettings.json file. Now we can boot the application up, and make some requests.

If we look at AI, we can see data in requests, but nothing in traces (it may take a few minutes for data to show up after making the requests, so be patient). What’s going on?

By default, AI is capturing your application telemetry, but not tracing. The good news is that it’s trivial to add support for traces, by making a slight change to Startup.cs:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
  loggerFactory.AddApplicationInsights(app.ApplicationServices, LogLevel.Debug);

  // ...the rest of the existing pipeline configuration
}

We have just added AI into our logging pipeline; the code above will capture all messages from ILogger, and, if they are at logging level Debug or above, will persist them to AI. Make some more requests, and then look at the traces table:


One further thing that we can refine is exactly what gets captured. You’ll notice in the screenshot above that everything is going in: our messages, and the ones from ASP.NET. We can fine-tune this if we want:

loggerFactory.AddApplicationInsights(app.ApplicationServices, (category, level) =>
  category.StartsWith("MyNamespace.") && level > LogLevel.Trace);

You’re free to do any filtering you like here, but the example above will only log trace messages where the logging category starts with one of your namespaces, and where the log level is greater than Trace. You could get creative here, and write a rule that logs any warnings and above from System.*, but everything from your own application.
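As a sketch, that last rule might look something like this ("MyNamespace" is a placeholder for your own root namespace):

```csharp
loggerFactory.AddApplicationInsights(app.ApplicationServices, (category, level) =>
{
    // Keep everything above Trace level from our own code...
    if (category.StartsWith("MyNamespace.", StringComparison.Ordinal))
        return level > LogLevel.Trace;

    // ...but only warnings and above from the framework and libraries.
    return level >= LogLevel.Warning;
});
```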

Traces are one of the biggest life-savers you can have during a production issue — make sure that you don’t lose them!

VSTS agile work item types explained

At Sun Branding Solutions, we use the Agile template for our work in Visual Studio Team Services. We’ve been using this for a while, but people still occasionally ask what the difference is between epics, features, stories and tasks, so here are my thoughts on when and why to use the available item types.

Diagram of work item levels

Epics

If work items were books, then epics are The Lord of the Rings. An epic should represent a strategic initiative for the organisation – for example, “let’s go mobile”, or adding a brand new area of functionality. They may also be technical in nature, such as moving to the cloud, or a major re-skinning, but they should always be aligned to business goals.

An epic probably won’t have much description, and certainly shouldn’t include detailed specifications (that’s up to the Product Owner and their team of Business Analysts to produce).

Epics will span several sprints. Try not to add too many to the backlog, as that cheapens their impact. You probably wouldn’t have more than a handful within a year.

Epics are split into multiple…

Features

Going with our book metaphor, these are an individual Harry Potter novel – part of a greater whole, but still a self-contained unit of value. The feature is defined by the Product Owner, with the help of BAs, key customers and end users. An example might be “upload files from OneDrive” – or, to put it another way, a feature is a bullet point on your marketing literature, whilst an epic is a heading.

If the product roadmap is largely defined by your customers, then features are the things that your customers really care about. If your roadmap is internal, then features are what your managers care about.

A feature may span multiple sprints, although ideally not more than three.

Features are broken down into…

User Stories

“As an X, I want to Y, so that I can Z.”

The BA will typically break a feature down into the individual requirements, which become user stories. A user story should be a distinct chunk of work that can be delivered and provide some value to the product.

Do not be tempted to split stories into “database architecture”, “data access layer”, “UI” – stories should reflect user requirements. Use tasks for development work items.

Stories have acceptance criteria that represent the requirements the developers must meet. These are absolutely crucial, because they are how the output will be tested and judged.

At this level, we also have…

Bugs

Probably the easiest to define, a bug is a defect that has been noticed and triaged. Bugs have reproduction steps and acceptance criteria. Typically, your QA team will own bugs, but your Product Owner, BA or Product Consultant may also be interested in specific bugs that affect them or their customers.

Beware of people using bugs to slip in change requests – these should always be handled as User Stories instead!

Both User Stories and Bugs break down into…

Tasks

A task is the only type of work item that counts towards the burn-down. Tasks have original and current estimates, the amount of time put in, and the amount of time remaining (it’s the remaining time that counts to the burn-down).

Developers own tasks. It is down to the architects and developers to analyse the User Stories (ideally, they should have been involved in drafting the User Stories and even Features to ensure they have a good general understanding), and set up as many tasks as they think are required.

The level of detail in tasks will vary depending on the team – a small, closely-knit team can get away with titles only, whilst a larger team may need more detail.

Tasks have a work type (the Activity field), which defines what type of resource is required – development, design, testing, documentation and so on.

When setting the capacity of your team, you can select the type of activity that your team members have, and this will help you see how much work of each type you can accommodate within a sprint.

Using the TryGet pattern in C# to clean up your code

Modern C# allows you to declare output variables inline; this has a subtle benefit of making TryFoo methods more attractive and cleaning up your code. Consider this:

public class FooCollection : ICollection<Foo>
{
  // ICollection<Foo> members omitted for brevity

  public Foo GetFoo(string fooIdentity)
  {
    return this.FirstOrDefault(foo => foo.Identity == fooIdentity);
  }
}

// somewhere else in the code
var foo = foos.GetFoo("dave");
if (foo != null)
{
  // do something with foo
}

Our GetFoo method will return the default value of Foo (i.e. null) if one isn’t found that matches fooIdentity — code that uses this API needs to know that null indicates that no matching item was found. This isn’t unreasonable, but it does mean that we’re using two lines to find and assign our matching object. Instead, let’s try this approach:

public class FooCollection : ICollection<Foo>
{
  public bool TryGetFoo(string fooIdentity, out Foo fighter)
  {
    fighter = this.FirstOrDefault(foo => foo.Identity == fooIdentity);
    return fighter != null;
  }
}

We’ve encoded the knowledge that null means “not found” directly into our method, and there’s no longer any ambiguity about whether we found our fighter or not. Our calling code can now be reduced by a line:

if (foos.TryGetFoo("dave", out var foo))
{
  // do something with foo
}

It’s not a huge saving on its own, but if you have a class of several hundred lines that’s making heavy use of the GetFoo method, this can save you a significant number of lines, and that’s always a good thing.

Lose your self-respect with async

How many times have you written code like this in JavaScript/TypeScript?

function saveData() {
  const self = this;
  $.post("/api/foo", function (response) {
    self.handleResponse(response); // "this" would be wrong here
  });
}

The self variable is used to keep a reference to the real calling object so that, when the callback is executed, it can actually call back to the parent (if you used this, then its value would change as the callback executes).

Enter async! Suddenly, you can write:

async function saveData() {
  const response = await $.post("/api/foo");
  this.handleResponse(response); // "this" is still the calling object
}

That’s an awful lot of lines that you’ve just saved across your codebase!