Logging traces in ASP.NET Core 2.0 with Application Insights

Application Insights (AI) is a great way to gather diagnostic information from your ASP.NET app, and with Core 2.0, itʼs even easier to get started. One thing that the tutorials don’t seem to cover is how to see your trace logs in AI. By “tracing”, I mean things like:

_logger.LogInformation("Loaded entity {id} from DB", id);

This can be an absolute life-saver during production outages. In simpler times, we might have used DebugView and a TraceListener to view this information (in our older, on-prem apps, we used log4net to persist some of this information to disk for more permanent storage). In an Azure world, this isn’t available to us, but AI in theory gives us a one-stop-shop for all our telemetry.

Incidentally, itʼs worth putting some care into designing your tracing strategy — try to establish conventions for message formats, what log levels to use when, and what additional data to record.
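
For example, a convention might look something like this (the event names and variables below are purely illustrative):

// Purely illustrative conventions:
// - use structured placeholders rather than string interpolation, so the values
//   become searchable properties in AI;
// - Information for business events, Debug for diagnostic detail,
//   Warning/Error for anything that needs human attention.
_logger.LogInformation("Order {OrderId} submitted by {UserId}", order.Id, user.Id);
_logger.LogDebug("Cache miss for key {CacheKey}", cacheKey);
_logger.LogWarning("Payment {PaymentId} failed, retrying (attempt {Attempt})", payment.Id, attempt);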

You can see the tracing data in the AI query view as one of the available tables:

tracing_table
The AI “traces” table

For our sample application, we’ve created a new website by using:

dotnet new webapi

We’ve then added the following code into ValuesController:

public class ValuesController : Controller
{
  private readonly ILogger _logger;

  public ValuesController(ILogger<ValuesController> logger)
  {
    _logger = logger;
  }

  [HttpGet]
  public IEnumerable<string> Get()
  {
    _logger.LogDebug("Loading values from {Scheme}://{Host}{Path}", Request.Scheme, Request.Host, Request.Path);
    return new string[] { "value1", "value2" };
  }
}

We have injected an instance of ILogger, and we’re writing some very basic debugging information to it (this is a contrived example as the framework already provides this level of logging).

Additionally, we’ve followed the steps to set our application up with Application Insights. This adds the Microsoft.ApplicationInsights.AspNetCore package, and the instrumentation key into our appsettings.json file. Now we can boot the application up, and make some requests.

If we look at AI, we can see data in requests, but nothing in traces (it may take a few minutes for data to show up after making the requests, so be patient). What’s going on?

By default, AI captures your application telemetry (requests and so on), but not your ILogger traces. The good news is that itʼs trivial to add support for them, by making a slight change to Startup.cs:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
  // existing middleware configuration omitted for brevity
  loggerFactory.AddApplicationInsights(app.ApplicationServices, LogLevel.Debug);
}

We have just added AI into our logging pipeline; the code above will capture all messages from ILogger, and, if they are at logging level Debug or above, will persist them to AI. Make some more requests, and then look at the traces table:

tracing_results1
The "traces" table showing our messages alongside those from ASP.NET

One further thing we can refine is exactly what gets captured. You’ll notice in the screenshot above that everything is going in: our messages, and the ones from ASP.NET. We can fine-tune this if we want:

loggerFactory.AddApplicationInsights(app.ApplicationServices, (category, level) =>
{
   return category.StartsWith("MyNamespace.") && level > LogLevel.Trace;
});

You’re free to do any filtering you like here, but the example above will only log trace messages where the logging category starts with one of your namespaces, and where the log level is greater than Trace. You could get creative here, and write a rule that logs any warnings and above from System.*, but everything from your own application.
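
A sketch of that kind of rule might look like this (treating "MyCompany." as a stand-in for your own root namespace):

loggerFactory.AddApplicationInsights(app.ApplicationServices, (category, level) =>
{
  // Always capture messages from our own code...
  if (category.StartsWith("MyCompany."))
  {
    return true;
  }

  // ...but only warnings and above from the framework and third-party libraries.
  return level >= LogLevel.Warning;
});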

Traces are one of the biggest life-savers you can have during a production issue — make sure that you donʼt lose them!


VSTS agile work item types explained

At Sun Branding Solutions, we use the Agile template for our work in Visual Studio Team Services. We’ve been using this for a while, but people still occasionally ask what the difference is between epics, features, stories and tasks, so here are my thoughts on when and why to use the available item types.

Diagram of work item levels

Epics

If work items were books, then epics would be The Lord of the Rings. An epic should represent a strategic initiative for the organisation – for example, “let’s go mobile”, or adding a brand new area of functionality. They may also be technical in nature, such as moving to the cloud, or a major re-skinning, but they should always be aligned to business goals.

An epic probably won’t have much description, and certainly shouldn’t include detailed specifications (that’s up to the Product Owner and their team of Business Analysts to produce).

Epics will span several sprints. Try not to add too many to the backlog, as that cheapens their impact. You probably wouldn’t have more than a handful within a year.

Epics are split into multiple…

Features

Going with our book metaphor, a feature is an individual Harry Potter novel – part of a greater whole, but still a self-contained unit of value. The feature is defined by the Product Owner, with the help of BAs, key customers and end users. An example might be “upload files from OneDrive” – or, to put it another way, a feature is a bullet point on your marketing literature, whilst an epic is a heading.

If the product roadmap is largely defined by your customers, then features are the things that your customers really care about. If your roadmap is internal, then features are what your managers care about.

A feature may span multiple sprints, although ideally not more than three.

Features are broken down into…

User Stories

“As an X, I want to Y, so that I can Z.”

The BA will typically break a feature down into the individual requirements, which become user stories. A user story should be a distinct chunk of work that can be delivered and provide some value to the product.

Do not be tempted to split stories into “database architecture”, “data access layer”, “UI” – stories should reflect user requirements. Use tasks for development work items.

Stories have acceptance criteria that represent the requirements the developers must meet. These are absolutely crucial, because they are how the output will be tested and judged.

At this level, we also have…

Bugs

Probably the easiest to define, a bug is a defect that has been noticed and triaged. Bugs have reproduction steps and acceptance criteria. Typically, your QA team will own bugs, but your Product Owner, BA or Product Consultant may also be interested in specific bugs that affect them or their customers.

Beware of people using bugs to slip in change requests – these should always be handled as User Stories instead!

Both User Stories and Bugs break down into…

Tasks

A task is the only type of work item that counts towards the burn-down. Tasks have original and current estimates, the amount of time put in, and the amount of time remaining (it’s the remaining time that counts to the burn-down).

Developers own tasks. It is down to the architects and developers to analyse the User Stories (ideally, they should have been involved in drafting the User Stories and even Features to ensure they have a good general understanding), and set up as many tasks as they think are required.

The level of detail in tasks will vary depending on the team – a small, closely-knit team can get away with titles only, whilst a larger team may need more detail.

Tasks have a work type (the Activity field), which defines what type of resource is required – for example development, testing, design or documentation.

When setting the capacity of your team, you can select the type of activity that your team members have, and this will help you see how much work of each type you can accommodate within a sprint.

Using the TryGet pattern in C# to clean up your code

Modern C# allows you to declare output variables inline; this has a subtle benefit of making TryFoo methods more attractive and cleaning up your code. Consider this:

public class FooCollection : ICollection<Foo>
{
  // ICollection<Foo> members omitted for brevity
  public Foo GetFoo(string fooIdentity)
  {
    return this.FirstOrDefault(foo => foo.Identity == fooIdentity);
  }
}

// somewhere else in the code
var foo = foos.GetFoo("dave");
if (foo != null)
{
  foo.Fight();
}

Our GetFoo method will return the default value of Foo (null) if nothing matches fooIdentity — code that uses this API needs to know that null indicates that no matching item was found. This isn’t unreasonable, but it does mean that we’re using two lines to find and assign our matching object. Instead, let’s try this approach:

public class FooCollection : ICollection<Foo>
{
  // ICollection<Foo> members omitted for brevity
  public bool TryGetFoo(string fooIdentity, out Foo fighter)
  {
    fighter = this.FirstOrDefault(foo => foo.Identity == fooIdentity);
    return fighter != null;
  }
}

We’ve encoded the knowledge that null means “not found” directly into our method, and there’s no longer any ambiguity about whether we found our fighter or not. Our calling code can now be reduced by a line:

if (foos.TryGetFoo("dave", out var foo))
{
  foo.Fight();
}

It’s not a huge saving on its own, but if you have a class of several hundred lines that’s making heavy use of the GetFoo method, this can save you a significant number of lines, and that’s always a good thing.

Lose your self-respect with async

How many times have you written code like this in JavaScript/TypeScript?

function saveData() {
  const self = this;
  $.post("/api/foo", function (response) {
    self.updated(true);
  });
}

The self variable keeps a reference to the real calling object so that, when the callback executes, it can still call back to the parent (if you used this inside the callback, it would no longer refer to the calling object).

Enter async! Suddenly, you can write:

async function saveData() {
  await $.post("/api/foo");
  this.updated(true);
}

That’s an awful lot of lines that you’ve just saved across your codebase!

Ditching typings for NPM @types

Managing TypeScript type files (*.d.ts) for third-party libraries has been a pain for a while; in the distant past of last year, I used the NPM package tsd, which has since been superseded by typings. Neither of these felt particularly nice to use, and I’ve been passively searching for alternatives.

The other day, I found out that NPM has typings support built-in! For example, to install JQuery types, one simply runs:

npm install --save-dev @types/jquery

Boom! If we look under node_modules, we see:

  • node_modules
    • @types
      • jquery
        • index.d.ts

It’s a wonderful thing. But wait! Now TypeScript, or rather Visual Studio, doesn’t know where to find the typings! Horror, whatever shall we do? Not to worry, turns out that TypeScript has us covered. In tsconfig.json, we simply add:

{
  ...
  "compilerOptions": {
    ...
    "typeRoots": [ "node_modules/@types" ]
  }
  ...
}

(You may need to close/open the project a few times before VS gets the message!)

So it’s goodbye typings – one less tool in the chain to worry about.

Dependency Injection with Model State Validation in ASP.NET Core

(caveat: this may or may not work in normal ASP.NET, I haven’t tried it yet)

“Validation” is a wonderfully ambiguous term. Usually, in simple examples, it’s restricted to ensuring that required fields are present, that dates are in the correct format, and so on. All of these only require access to the object being validated, which can be very easily done using data annotations:

using System;
using System.ComponentModel.DataAnnotations;

public class ValidateMe
{
  [Required]
  [StringLength(50)]
  public string ImARequiredField { get; set; }
}

You can take this a step further if, for example, you want to run rules on the data in the object by implementing IValidatableObject. An example:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class ValidateMe : IValidatableObject
{
  [StringLength(50)]
  public string ImARequiredField { get; set; }

  public bool TheFieldIsRequired { get; set; }

  public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
  {
    if (TheFieldIsRequired)
    {
      yield return new ValidationResult("The field is required", new string[] { nameof(ImARequiredField) });
    }
  }
}

Great! Now we can wire this up in our controller:

public IActionResult DoStuff(ValidateMe model)
{
  if (ModelState.IsValid)
  {
    // do stuff!
  }
  else
  {
    return View(model);
  }
}

Fantastic! But wait, what’s this? Oh dear, it’s the Real World(TM) come to burst our bubble.

Let’s take a more realistic example. Let’s say that we have a model that needs to set up a new user account. In our system, email addresses must be unique, and so we need to validate this before saving the user. We want to have a nice, clean, de-coupled architecture with injected dependencies, but where can we inject the dependencies into our validation system? Without them, we’d end up like this:

public IActionResult AddUser(AddUserModel model, [FromServices] IEmailValidator emailValidator)
{
  if (!emailValidator.IsEmailAvailable(model.Email))
  {
    ModelState.AddModelError("Email", "Email address is taken");
  }

  if (ModelState.IsValid)
  {
    // do stuff!
  }
  else
  {
    return View(model);
  }
}

Well, that’s ugly – it’s not terrible, but it’s bloating our controller, and, it turns out, we don’t need to do it at all. Enter ValidationContext!

public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
{
  // magic happens here!
  var emailValidator = (IEmailValidator)validationContext.GetService(typeof(IEmailValidator));

  // now I has dependency, I can use ftw!
  if (!emailValidator.IsEmailAvailable(Email))
  {
    yield return new ValidationResult("Email address is taken", new string[] { nameof(Email) });
  }
}

Without doing anything special, ASP.NET is clever enough to pass its IServiceProvider into ValidationContext, and so we now have access to any dependencies that our application needs. The only slight downside is that the Validate method isn’t async, and so we’ll need to wrap any async calls in Task.Run(...) calls, but that’s a small price to pay for keeping our controllers slim.
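
For this to resolve anything, the dependency does of course need to be registered with the container. A minimal sketch, assuming a hypothetical IEmailValidator abstraction and EmailValidator implementation, added to Startup.ConfigureServices:

// A hypothetical validator abstraction and its registration in Startup.cs.
public interface IEmailValidator
{
  bool IsEmailAvailable(string email);
}

public void ConfigureServices(IServiceCollection services)
{
  services.AddMvc();

  // EmailValidator is an assumed implementation; once registered here,
  // ValidationContext.GetService can resolve it inside Validate().
  services.AddScoped<IEmailValidator, EmailValidator>();
}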

SUNrise architecture – Part 1

I recently promised Twitter that I’d blog about the architecture of SUNrise.

Caveat: this won’t be an exhaustive document, partly for reasons of confidentiality, and partly because I’m writing this in the hour or so between putting my son to bed and the time where the tumbler of whisky next to me finally sends me to sleep.

A bit of background: SUNrise is an enterprise Graphics Lifecycle Management platform, written in-house at Sun Branding Solutions using (primarily) .NET and Microsoft Azure. It replaces our previous flagship product, ODIN, which dates from the early 2000s and runs on a combination of COM+, VB6 and Microsoft Project (yes, really). We began writing SUNrise in late 2013, although prototypes were kicking around since late 2012.

We were fortunate enough to start work on SUNrise just as Microsoft Azure was becoming an attractive platform. We began using Cloud Services, but have since moved to using App Services (formerly Websites).

technology_map
SUNrise technology map

Our core data platform is SQL Azure, although we are increasingly starting to add de-normalized lookup data into Azure Table Storage, and we maintain a separate search index using Azure Search. We maintain (and have had to use!) a read-only replica of our core database in another region, and, apart from failover in the event of SQL Azure outages (yes, they have happened), we use this as the data source for our ETL processes to avoid clogging up the transactional system.

sunrise-logical-architecture
Diagram showing the logical architecture of SUNrise

By using Azure Blob Storage, we can provide highly scalable binary storage for our clients without having to buy the storage up-front – we pay for what we use and factor this into our pricing model. Using the storage APIs means that we can simply keep pushing files into storage, without worrying about folder or file limits or path length issues – under the covers, we simply assign each binary file a GUID, and use that as the URI pattern.
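
As a rough illustration of that pattern – not our production code, and the container name and connection-string handling are placeholders – using the classic WindowsAzure.Storage SDK:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class BinaryStore
{
  private readonly CloudBlobContainer _container;

  public BinaryStore(string connectionString)
  {
    var account = CloudStorageAccount.Parse(connectionString);
    var client = account.CreateCloudBlobClient();
    _container = client.GetContainerReference("binaries");
  }

  // Returns the GUID that becomes part of the blob's URI.
  public async Task<Guid> SaveAsync(Stream content)
  {
    var id = Guid.NewGuid();
    var blob = _container.GetBlockBlobReference(id.ToString());
    await blob.UploadFromStreamAsync(content);
    return id;
  }
}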

SUNrise is a distributed application, and consists of several websites, hosting various parts of the logical application from the core customer-facing website, to the login screen or the integration API. Each of these are hosted as an App Service, backed by a hosting plan with a minimum of two instances. The diagram below shows how we use a mixture of hosting plans of various sizes, from a single large plan running a single site, to a smaller plan hosting a mixture of lesser-used or less-intensive applications. The joy of using Azure is that changing this layout is a matter of configuration rather than ordering new physical hardware.

hosting_plans
App service hosting plan layout

As well as customer-facing web applications, we run background processes on WebJobs, using a mixture of continuous and scheduled processes. By abstracting the plumbing of receiving queue messages, we have been able to write durable and reliable queue processors and task schedulers with very little code (the actual processes that they kick off are a different matter!).
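
To give a flavour of how little plumbing a WebJobs queue processor needs (the queue name and handler below are invented for the example):

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
  // The WebJobs SDK polls the queue, deserializes the message and handles
  // retries and poison messages; we just write the handler.
  public static void ProcessWorkflowStep(
    [QueueTrigger("workflow-steps")] string message,
    TextWriter log)
  {
    log.WriteLine("Processing message: {0}", message);
    // ... kick off the actual workflow processing here ...
  }
}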

Other technologies used include Redis Cache, SSIS, ApplicationInsights and Cloud Services.

The application is written in .NET 4.6, but we are actively porting our code to .NET Core 1.0 (a process that we’ll complete with the release of Visual Studio 2017). We aim to use mostly POCOs and don’t rely on any one framework; to that end, we’re gradually moving towards Dapper for data access, and away from Entity Framework.
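
To illustrate the appeal of Dapper – this is a purely illustrative sketch, not code from SUNrise – a query is little more than SQL plus a POCO:

using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class Order
{
  public int Id { get; set; }
  public string Reference { get; set; }
}

public static class OrderRepository
{
  public static Order GetOrder(string connectionString, int id)
  {
    using (var connection = new SqlConnection(connectionString))
    {
      // Dapper maps the selected columns straight onto the POCO's properties.
      return connection.Query<Order>(
        "SELECT Id, Reference FROM Orders WHERE Id = @Id",
        new { Id = id }).SingleOrDefault();
    }
  }
}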

We use a wide range of .NET technologies, from EF to WF, WCF, WebAPI, MVC and Razor.

The “secret sauce” of SUNrise is the workflow engine, a custom-built domain-specific language for modelling and running our customers’ business processes. For scalability, we run most of the intensive processing for this in a background WebJob using Storage Queues to pass messages between the application and processing tiers. By using a competing consumers model and idempotent/stateless messages, we can easily scale this system up by increasing the number of instances within the Hosting Plan, and in fact we do this automatically, so that higher volumes of messages in the queue will spin up more servers to handle them.

Development Environment

Our developers all run Visual Studio 2015 Enterprise Edition, and work on their own laptops against local databases. We use a combination of SSDT and FluentMigrator to model our data layer, the upshot of which is that developers can build a local copy of the database with a single command, so that each developer is working on their own segregated data island. Where certain services aren’t available locally (such as Azure Search), we emulate them using the next-best equivalent (in the case of Azure Search, we run ElasticSearch locally to provide a similar environment, and perform more detailed integration testing against a testing Azure subscription).
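
For anyone unfamiliar with FluentMigrator, a migration is just an attributed C# class, which is what makes the build-a-database-with-one-command workflow practical. A purely illustrative example:

using FluentMigrator;

[Migration(201701011200)]
public class AddAuditTable : Migration
{
  public override void Up()
  {
    Create.Table("Audit")
      .WithColumn("Id").AsInt32().PrimaryKey().Identity()
      .WithColumn("Message").AsString(500).NotNullable()
      .WithColumn("CreatedOn").AsDateTime().NotNullable();
  }

  public override void Down()
  {
    Delete.Table("Audit");
  }
}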

We all run SQL Server 2016 Developer Edition on our machines, as well as the Azure Storage Emulator, and various services such as Redis and Papercut (to emulate local SMTP).

We host our source code, CI builds, release and test automation on Visual Studio Team Services, but that’s a subject for another blog post!