Continuous deployment

When talking about Extreme Programming (XP) or agile software development, most discussions focus on the coding techniques and practices: unit testing, the iterations and sprints. However, there are a few other very important pieces of the puzzle, one of them being continuous deployment.

Due to the iterative nature of agile, we end up committing quite often, and when we commit often, we'd like to be able to see the results in an environment different from our own. This is where continuous integration and deployment solutions come in very handy.

The basic idea is simple: you set up a continuous integration server to check your version control system for new commits. When it finds something, it fetches the latest version and builds it. You can set it up to run source code analysis, run unit tests, and so on. If everything passes, you can then use a continuous deployment tool to install the new version of your software on a test server.

There are further steps you can take to automate this, such as running automated UI tests on the test server; if they all pass, promote (deploy) the build to a staging server where you can do more testing (even manual tests from QA or the customer).

Although quite new (compared to Amazon’s EC2), Microsoft’s Windows Azure is catching up quickly. The web administration portal is friendly, and it’s fast to spin up a few VMs for a simple continuous deployment proof of concept.

I wrote a simple web project: a web chat page using SignalR. I’ve also been looking a lot into Nancy lately, so I wanted to see if there are any problems integrating SignalR with Nancy (there weren’t any). After writing the simple implementation, I created a GitHub repo and pushed the code there.
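
For reference, the hub behind a chat page like this is just a standard SignalR hub, along these lines (a sketch with illustrative names, not necessarily the exact code in the repo):

using Microsoft.AspNet.SignalR;

// Minimal chat hub: clients call Send, and the hub broadcasts the message
// to every connected client via the dynamic Clients.All proxy.
public class ChatHub : Hub
{
    public void Send(string name, string message)
    {
        Clients.All.addMessage(name, message);
    }
}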

I created a small VM to host the continuous integration server. I chose TeamCity because I’ve used it before and it’s quite simple to set up. Everything was straightforward: I installed it, created a new project, and set up a Visual Studio (sln) build step. You will probably also need to set it up to host the NuGet server, which is necessary for publishing the build.

The build failed the first couple of times because I didn’t have the web targets file (the build log will give you the exact path, so you can copy it from your development PC to the build server) and I forgot to enable NuGet package restore. Then I got another error saying that NuGet won’t install packages without confirmation, but that was easily fixed by creating a system-wide environment variable named EnableNuGetPackageRestore and setting it to true.

The project now builds, so it’s time to deploy it. I installed the Octopus server on the same VM as TeamCity. You will also have to install the Octopus TeamCity plugin, which helps integrate the two. I created a new VM to act as the test server and installed an Octopus tentacle on it. After a bit of simple setup, adding a publish step as well as a trigger step to the TeamCity build, everything runs smoothly.

I only have to push a new version to GitHub, and a minute later, it’s already deployed on the test server.

Automating tasks such as building and deploying the latest version to various development and test servers, as well as running tests, can save a lot of time and effort. This setup was pretty straightforward, so I won’t go into more detail in this post. I’ll continue expanding it and will detail how to set up TeamCity to run unit tests and automated UI tests.

If you want to try this for yourself and don’t want to take the time to create a simple website to test it, you can use my GitHub repository here.

What CI/CD solutions do you use?

Test Driven Development techniques part 4: reversing the file

Although I promised I’d do some refactoring, I’ve decided against it. It doesn’t make much sense: the solution is too small, and there’s little to no benefit to refactoring it at all. Instead, I’ll just finish up the series in this post and come back to refactoring techniques later.

This will be a short post, because implementing the reverse feature didn’t require any new techniques. I just used a few mocks here and a few stubs there, with a few interfaces. Everything’s quite straightforward. Instead of going through all those changes, I’ll write down a couple of thoughts on how I would improve this if it were production code. In no particular order:

  1. Error checking. There’s no handling of exceptions, and that’s dangerous. What if the file is too large and we run out of memory (especially since we’re reading the whole content as a string)? What if the file is deleted between the time we check whether it exists and the time we actually try to read it? I’m sure there are other exceptions and edge cases, but you get the point (see the sketch after this list).
  2. Logging. There’s none, and logging is good. Especially for detailing error messages/exceptions.
  3. Splitting the contracts/interfaces from their implementation. Normally, I’d have a separate library project for the contracts. It’s cleaner, and many times you only need to reference/depend on the contracts.
  4. Refactor all the code in the Main method into a different class, and test that. There are no tests for all that code, so we have no guarantee we actually use our otherwise tested code.
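
To illustrate the first point, here’s a rough sketch (a hypothetical helper, reusing the story’s return codes) of what guarding the file read could look like:

using System.IO;

public static class SafeFileReader
{
    // Returns 0 on success, 1 if the file is missing, 3 if it cannot be read;
    // the content is still read as a single string, as in the current implementation.
    public static int TryReadAllText(string path, out string content)
    {
        content = string.Empty;
        try
        {
            content = File.ReadAllText(path);
            return 0;
        }
        catch (FileNotFoundException) { return 1; }  // deleted between the existence check and the read
        catch (OutOfMemoryException) { return 3; }   // file too large to hold as one string
        catch (IOException) { return 3; }            // locked, unreadable, etc.
    }
}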

You can check the full source code over at my github repo. How would you improve on the current code?

Test Driven Development techniques part 3: Dependency Injection

This post will be a bit different than the previous two in this series. It’ll be less about the actual code and more about Dependency Injection and the general design of the code I wrote so far.

When writing code using TDD, I end up with a lot of interfaces. This is a good thing, because it helps me adhere to quite a few of the SOLID principles, specifically the Single responsibility principle (SRP) and the Interface segregation principle (ISP), which go hand in hand.

Dependency Injection helps tie it all together. Because everything depends on interfaces (abstractions, according to the Dependency inversion principle), there will be places that need to create objects (new FileChecker(), new ConcreteValidator(), etc.). If these are scattered around the codebase, they create a lot of inter-dependency between the layers and the actual concrete types.

For example, if I’m using a certain UI library and I need to create FancyButtons in multiple places, then instead of doing that, I will create an IButton interface and pass instances in somehow (constructor, properties, etc.). Then I’ll have a factory, builder, or container – some sort of object that is the single point for requesting such objects. This helps me switch to FancierButton without having to touch multiple files.
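
A toy sketch of the idea (the types here are just illustrative):

public interface IButton
{
    void Render();
}

public class FancyButton : IButton
{
    public void Render() { /* draw the fancy button */ }
}

public static class ButtonFactory
{
    // The single place that knows the concrete type; switching to FancierButton
    // means changing this one method instead of every call site.
    public static IButton Create()
    {
        return new FancyButton();
    }
}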

This is where DI comes in. It’s a way to streamline object creation, and the pattern is quite similar to classic factory patterns. You register your types against their interfaces, and the DI container knows how to resolve them upon request.

For this project, I chose a very simple DI library called Autofac (it’s available via NuGet as well). The main idea is simple: you create a container builder, register the types you will be using and then build the container. The result is a container that you can use to resolve dependencies:

private static IContainer SetupDependencyInjection()
{
    var builder = new ContainerBuilder();
    builder.RegisterType<ConsoleOutputter>().As<IOutputter>();
    builder.RegisterType<ConsoleInput>().As<IInput>();
    return builder.Build();
}

It’s got a nice fluent interface you can use to register types, and it’s easy to read. It’s worth mentioning that if building a type depends on another type (for example, it takes it as a constructor parameter), Autofac will automatically detect that and resolve it. Another nice feature is auto-wiring; the above code can be changed to simply let Autofac find the types and interfaces automatically using reflection:

private static IContainer SetupDependencyInjection()
{
    var builder = new ContainerBuilder();
    builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly()).AsImplementedInterfaces();
    return builder.Build();
}

It’s quite a bit slower, but most applications only need to do this once, at startup. Depending on the type of application, you might want to set up the registrations manually, as an optimization – or whenever you have more than one concrete type for an interface, in which case you need to do a bit more setup. But for our current project, this will do perfectly.
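
If I ever did end up with two implementations of the same interface, the extra setup could look roughly like this (FileOutputter is hypothetical, just for illustration):

var builder = new ContainerBuilder();
builder.RegisterType<ConsoleOutputter>().As<IOutputter>();       // the default IOutputter
builder.RegisterType<FileOutputter>().Named<IOutputter>("file"); // an alternative, resolved by name
var container = builder.Build();

var consoleOutput = container.Resolve<IOutputter>();           // ConsoleOutputter
var fileOutput = container.ResolveNamed<IOutputter>("file");   // FileOutputter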

The new main method looks a lot cleaner now:

static int Main()
{
    var container = SetupDependencyInjection();
    using (var scope = container.BeginLifetimeScope())
    {
        var fileReverser = scope.Resolve<IFileReverser>();

        fileReverser.PromptForInput();

        var inputFile = fileReverser.ReadInput();
        var validationResult = fileReverser.ValidateInput(inputFile);
        if (validationResult.HasValue)
            return validationResult.Value;

        return 0;
    }
}

Before using DI, the Main method was responsible for creating about 7 concrete classes. That’s quite a lot of dependencies that I could remove using a DI library. You can check the full source code over at my github repo.

Next time, I’ll briefly show the implementation of the third story (prompting for and reading the output file name) and try to refactor the code a bit.

Test Driven Development techniques part 2: Console.Read and Validation

In my previous post, I started working on a simple application that reverses the contents of a text file. I completed the first story, prompting for the input file. Next story reads:

 * 2. The program should read a line from standard input which represents the full path to the file.
 *      - if the file does not exist, the program should return 1
 *      - if the file is not a text file, the program should return 2
 *      - if the program cannot open or read the file, it should return 3
 *      - the file can be empty, and the program should just output an empty file

I’ll start with the first bit, reading the file. With the confidence and experience I gained from the previous story, I can write the following test:

[Fact]
public void FileReverser_reads_file_input()
{
    // Given
    var inputMock = new Mock<IInput>();
    inputMock.Setup(i => i.Read()).Verifiable();
    var fileReverser = new App.FileReverser(null, inputMock.Object);

    // When
    fileReverser.ReadInput();

    // Then
    inputMock.Verify();
}

As previously, it doesn’t compile, so I create the IInput interface with a Read method, add a private member in FileReverser, update its constructor and create an empty ReadInput method. I run the tests and, sure enough, the new one fails. I add a return _input.Read(); statement to the method and the test is now green. I am now confident the changes are fine.
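
The resulting change looks roughly like this (a sketch; the repo may differ in the details):

public interface IInput
{
    string Read();
}

public class FileReverser
{
    public const string InputMessage = "Please enter the full path and name of the input file: ";

    private readonly IOutputter _outputter;
    private readonly IInput _input;

    // The constructor now takes the IInput alongside the IOutputter,
    // matching the (outputter, input) order used in the test above.
    public FileReverser(IOutputter outputter, IInput input)
    {
        _outputter = outputter;
        _input = input;
    }

    public void PromptForInput()
    {
        _outputter.Write(InputMessage);
    }

    public string ReadInput()
    {
        return _input.Read();
    }
}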

There are two things that cross my mind now: first, the application’s runtime doesn’t reflect this new story at all (I don’t even have a class that implements IInput). Second, I need to somehow validate the input I read, but I’m unsure where that should happen. I don’t think it’s a good idea to validate it in the ReadInput method, because the method would have too many responsibilities. Maybe if I had a validator that I could test separately, I could see it more clearly. I’ll do that next.

The validator should check the four cases described in the story. It should take a string representing the file path. Sounds simple enough. Let’s write a test for it. I’ll create a new test class to keep things clean.

[Fact]
public void Validator_returns_1_if_file_does_not_exist()
{
    // Given
    var fileName = string.Empty;
    var fileChecker = new Mock<IFileChecker>();
    fileChecker.Setup(fc => fc.FileExists(fileName)).Returns(false).Verifiable();
    var validator = new InputFileValidator(fileChecker.Object, fileName);

    // When
    var result = validator.Validate();

    // Then
    result.Should().Be(1);
    fileChecker.Verify();
}

That was harder than I thought, but I’m confident it’s not too bad. I know I need an IFileChecker interface because, as with the Console, I don’t want to go outside of my code to test – and definitely not hit the operating system.

Because I’ve been taking big steps since the start of this post, I’ve started to worry I might break something. Whenever I realize this, I try to stop and switch the TDD technique to “Fake it”. Here’s what I ended up with:

[Fact]
public void Validator_returns_1_if_file_does_not_exist()
{
    // Given
    var fileName = string.Empty;
    var fileChecker = new Mock<IFileChecker>();
    fileChecker.Setup(fc => fc.FileExists(fileName)).Returns(false).Verifiable();
    var validator = new InputFileValidator(fileChecker.Object, fileName);

    // When
    var result = validator.Validate();

    // Then
    result.Should().Be(1);
    fileChecker.Verify();
}
//...

public class InputFileValidator
{
    private readonly IFileChecker _fileChecker;

    public InputFileValidator(IFileChecker fileChecker, string fileName)
    {
        _fileChecker = fileChecker;
    }

    public int Validate()
    {
        _fileChecker.FileExists(string.Empty);
        return 1;
    }
}

public interface IFileChecker
{
    bool FileExists(string fileName);
}

Basically, I did the minimum amount of work to make my test pass. I’m quite certain the code isn’t right, and I’m going to write a new test to prove it. I’ll start by addressing the fact that the validator doesn’t save or use the filename it gets. For this, I could write another test, but I know it would look exactly like this one, except with a different string as the filename. There’s an xUnit extension for that: it has something called theories and inline data. Basically, I can write a test that takes a parameter, and xUnit runs that test multiple times, once for every InlineData attribute:

[Theory]
[InlineData("")]
[InlineData(@"some\path")]
public void Validator_returns_1_if_file_does_not_exist(string fileName)
{
    // Given
    var fileChecker = new Mock<IFileChecker>();
    fileChecker.Setup(fc => fc.FileExists(fileName)).Returns(false).Verifiable();
    var validator = new InputFileValidator(fileChecker.Object, fileName);

    // When
    var result = validator.Validate();

    // Then
    result.Should().Be(1);
    fileChecker.Verify();
}

This test will fail for the second InlineData attribute. I run it and it does, indeed. Time to fix the validator. It’s a simple fix – store the file name in the validator and pass it to the file checker – and it works. Next up, the “return 1”. We need a new test:

[Fact]
public void Validator_returns_0_if_file_exists()
{
    // Given
    var fileName = string.Empty;
    var fileChecker = new Mock<IFileChecker>();
    fileChecker.Setup(fc => fc.FileExists(fileName)).Returns(true).Verifiable();
    var validator = new InputFileValidator(fileChecker.Object, fileName);

    // When
    var result = validator.Validate();

    // Then
    result.Should().Be(0);
    fileChecker.Verify();
}

Sure enough, the test fails. However, I realize that if I change the return 1 statement to return 0, the other test will fail – but I won’t know whether it fails because I changed the return statement or because the validator isn’t getting passed the right string. I’ve done something bad: testing two different things with one test. I need to change that. I’ll remove the result.Should().Be(1) from my previous test and only keep the mock verification. That should make that test pass. And it does. I should also rename the method to better reflect what it tests.
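
Roughly what the reworked test looks like after dropping the result assertion (the method name is just a suggestion):

[Theory]
[InlineData("")]
[InlineData(@"some\path")]
public void Validator_passes_the_correct_filename_to_the_fileChecker(string fileName)
{
    // Given
    var fileChecker = new Mock<IFileChecker>();
    fileChecker.Setup(fc => fc.FileExists(fileName)).Returns(false).Verifiable();
    var validator = new InputFileValidator(fileChecker.Object, fileName);

    // When
    validator.Validate();

    // Then
    fileChecker.Verify();
}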

I change the return statement to return 0, and now all my tests pass. That’s a mistake: nothing checks the “return 1” case anymore, so I’ll write a new test for it. I won’t paste it here because it’s identical to the returns_0 one, except the mock returns false and the result is checked against 1. It looks like it’s finally time to fix the Validate method: return _fileChecker.FileExists(_fileName) ? 0 : 1. All tests pass.

Next up, I could either go to the next validator or try to integrate my working validator with the rest of the project. I have a feeling writing the next validator will shed some light on the overall design of validators, so I’ll do that first.

The implementation is quite trivial, following the example of our previous validator, so I’ll just show you the new set of three tests that I wrote (a sketch of the validator itself follows them):

[Theory]
[InlineData("")]
[InlineData(@"some\path")]
public void Validator_passes_the_correct_filename_to_the_textFileChecker(string fileName)
{
    // Given
    var textFileCheckerMock = new Mock<ITextFileChecker>();
    textFileCheckerMock.Setup(tfc => tfc.IsTextFile(fileName)).Verifiable();
    var validator = new TextFileValidator(textFileCheckerMock.Object, fileName);

    // When
    validator.Validate();

    // Then
    textFileCheckerMock.Verify();
}

[Fact]
public void Validator_returns_0_if_file_is_text()
{
    // Given
    var fileName = string.Empty;
    var textFileCheckerMock = new Mock<ITextFileChecker>();
    textFileCheckerMock.Setup(tfc => tfc.IsTextFile(fileName)).Returns(true);
    var validator = new TextFileValidator(textFileCheckerMock.Object, fileName);

    // When
    var result = validator.Validate();

    // Then
    result.Should().Be(0);
}

[Fact]
public void Validator_returns_2_if_file_is_not_text()
{
    // Given
    var fileName = string.Empty;
    var textFileCheckerMock = new Mock<ITextFileChecker>();
    textFileCheckerMock.Setup(tfc => tfc.IsTextFile(fileName)).Returns(false);
    var validator = new TextFileValidator(textFileCheckerMock.Object, fileName);

    // When
    var result = validator.Validate();

    // Then
    result.Should().Be(2);
}
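
For completeness, the validator implied by these tests looks roughly like this (a sketch – the actual code in the repo may differ slightly):

public interface ITextFileChecker
{
    bool IsTextFile(string fileName);
}

public class TextFileValidator
{
    private readonly ITextFileChecker _textFileChecker;
    private readonly string _fileName;

    public TextFileValidator(ITextFileChecker textFileChecker, string fileName)
    {
        _textFileChecker = textFileChecker;
        _fileName = fileName;
    }

    public int Validate()
    {
        // 0 means the file is a text file; 2 matches the story's "not a text file" return code.
        return _textFileChecker.IsTextFile(_fileName) ? 0 : 2;
    }
}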

Everything passes. Time to evaluate what we’ve done. The two sets of three tests are almost identical, except for a few key differences:

  • they both use different interfaces for testing (IFileChecker, ITextFileChecker)
  • they both use different validator classes, but the functionality is identical
  • they return slightly different result codes

I think I can refactor this to clean it up a lot, but I’ll refrain from doing too much. It’s only 3 validators, and they don’t necessarily need a common base interface. The result is that we have 4 tests: one for each return value (0, 1, 2 and 3) and one to make sure we actually use the interfaces, using mocks.

Next time, I’ll implement these validator interfaces, make sure the code actually runs and works and talk a bit about dependency injection. Meanwhile, you can download the code in its current state over at my github repo.

Test Driven Development techniques part 1: Console.Write

I decided it’s time I write a post about TDD and I’ve chosen a rather simple problem. I created a new solution named FileReverser and added a couple of projects:

  • FileReverser.App – console application that will hold all the code
  • FileReverser.Facts – library project that will hold all the tests

I opened the App’s Program.cs file and wrote down the requirements:

/*
 * Requirements:
 * 
 * 1. The program should prompt for the input file.
 *      - the message should read "Please enter the full path and name of the input file: "
 * 2. The program should read a line from standard input which represents the full path to the file.
 *      - if the file does not exist, the program should return 1
 *      - if the file is not a text file, the program should return 2
 *      - if the program cannot open or read the file, it should return 3
 *      - the file can be empty, and the program should just output an empty file
 * 3. The program should prompt for the output file.
 *      - the message should read "Please enter the full path and name of the output file: "
 *      - if the file exists, the program should return 4
 *      - if the file cannot be created, the program should return 5
 * 4. The program should reverse the input file into the output file.
 *      - the program should return 0
 */

Since there are only 4 stories, I won’t take the time to estimate or prioritize them. I’ll just start working on them in the order I defined them.

1. The program should prompt for the input file

Time to start with the first test. I’ll be using the xUnit, Moq and FluentAssertions NuGet packages; make sure you install them before continuing. I like writing the first test as pseudo-code and filling in the blanks one by one. The flow is:

  1. Write the test as if you already have the classes/methods you need,
  2. Make it compile (the test should fail though, because otherwise the test isn’t very meaningful; if it doesn’t fail, write a better test),
  3. Make the test work,
  4. Refactor, and
  5. Go back to 1.

[Fact]
public void FileReverser_prompts_for_input_file()
{
    // Given
    var fileReverser = new FileReverser();

    // When
    fileReverser.PromptForInput();

    // Then
    // how do I verify this?
}

Obviously, VS is letting me know I have some errors: there’s no FileReverser class (and obviously no method named PromptForInput), and I don’t yet have a way to verify the behavior.

Let’s create the FileReverser class:

public class FileReverser
{
    public void PromptForInput()
    { }
}

If I simply write Console.WriteLine("input file") in the method, I won’t be able to test it without doing some stream magic. I don’t particularly like doing that, and the only way I know of to get around it is to abstract the concept of console input/output. So, instead of using the System.Console class, I’ll make FileReverser use an interface:

public class FileReverser
{
    private readonly IOutputter _outputter;

    public FileReverser(IOutputter outputter)
    {
        _outputter = outputter;
    }

    public void PromptForInput()
    { }
}

Next, I simply generate the interface and compile. Whoops, the FileReverser constructor now needs its outputter parameter. This is where I can use a mock and finally verify the prompt:

[Fact]
public void FileReverser_prompts_for_input_file()
{
    // Given
    var outputterMock = new Mock<IOutputter>();
    outputterMock.Setup(o => o.Write(string.Empty)).Verifiable();
    var fileReverser = new FileReverser(outputterMock.Object);

    // When
    fileReverser.PromptForInput();

    // Then
    outputterMock.Verify();
}

I run the tests, and sure enough the test fails saying the Write method isn’t called. I add the line _outputter.Write(string.Empty); in the PromptForInput method and the test is green. Great news!

Next, refactoring. There isn’t much obviously bad code around, but ReSharper created the FileReverser and IOutputter classes in the same file as my test. I should move them to the App project, into their own files. That’ll do for this cycle of refactoring.

Now, there’s still one more thing to do before I can close this story. I need to make sure I’m passing the right text message to the outputter. Since I can’t have both tests work (if I change the outputter line in FileReverser, it won’t write string.Empty), I’ll just change the current test:

[Fact]
public void FileReverser_prompts_for_input_file_with_correct_message()
{
    // Given
    var outputterMock = new Mock<IOutputter>();
    outputterMock.Setup(o => o.Write(App.FileReverser.InputMessage)).Verifiable();
    var fileReverser = new App.FileReverser(outputterMock.Object);

    // When
    fileReverser.PromptForInput();

    // Then
    outputterMock.Verify();
}

And the FileReverser class:

public class FileReverser
{
    private readonly IOutputter _outputter;

    public const string InputMessage = "Please enter the full path and name of the input file: ";

    public FileReverser(IOutputter outputter)
    {
        _outputter = outputter;
    }

    public void PromptForInput()
    {
        _outputter.Write(InputMessage);
    }
}

Great. Story done! … wait. The program doesn’t actually do anything. My Main method is empty, and no test is letting me know I forgot to tie it all together. Moreover, I don’t even have an IOutputter implementation, so I can’t even write the Main method without writing more code.

When writing unit tests, I often find myself asking: what are the bounds of what I’m testing? You often can’t have 100% test coverage, and even if you can, it’s very expensive to do so. The solution is to define what you are going to test. For now, let’s assume my Main method doesn’t need to be tested, and neither does the Outputter implementation. If they get too complicated, I might decide to add an extra layer and test that. For now, let’s implement the Outputter:

public class ConsoleOutputter : IOutputter
{
    public void Write(string message)
    {
        System.Console.Write(message);
    }
}

The Main method is quite simple too:

static void Main()
{
    var outputter = new ConsoleOutputter();
    var fileReverser = new FileReverser(outputter);

    fileReverser.PromptForInput();
}

The test still passes and the program also behaves as it should. So where do we stand?

We’ve implemented the first story. We have a passing test for it. We have defined what we’re going to test and what we’re going to skip testing. Reviewing the code for a bit, I can see a few places that could be improved:

  • the Main method might get too complicated to leave as is, and will probably need some refactoring (probably adding another class/interface)
  • the Main method is in charge of both using the main application class (FileReverser) as well as creating the concrete implementations of our contracts (interfaces); I should use a Dependency Injection framework to take the object creation responsibilities off my Main method
  • the text message is hardcoded; it’s at least a constant string, but it could be extracted into a config file

I won’t make any of the changes above right now, but I’ll keep them in mind and revisit them after the next story is done.

What do you think I could have done better?

You can download the full code for this story over at my github repo.

Unit Testing and EntityFramework

I strongly believe in the benefits Test Driven Development brings to programming. I think it helps a lot with clarifying the requirements, finding edge cases and improving the overall structure and clarity of the code. That being said, I’ve had issues when trying to unit test EntityFramework. I don’t want to hit a real database (and turn my test into an integration test), but whenever I tried to abstract it, I’ve always run into the problems that arise from using IEnumerable as a stub/mock for IQueryable.

First off, I’ve had problems due to the fact that LINQ is implemented using extension methods, and as such it’s not the easiest model to mock (since there’s no actual interface to replace). Secondly, there are a few key differences between how the LINQ providers work. For example, consider this select:

Enumerable.Range(10, 20)
    .Select((val, i) => new { Index = i, Value = val });

That’s perfectly valid when using LINQ to Objects, but it will throw a runtime exception when sent to other providers, such as LINQ to SQL. The reason is quite obvious – you can easily get the index from something like a List, but the provider shouldn’t implicitly do magic behind the scenes to get it from an SQL database.

What this means is that you can’t safely unit test IQueryable code with IEnumerable as a stub/mock, so it’s probably necessary to create an extra layer on top of EntityFramework (for example, a repository abstraction) and either not test it or test it with integration tests. I don’t particularly like either solution, but I’ve had to use them in the past. That is, until recently.
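
By “extra layer” I mean something along these lines (a hypothetical sketch): a repository interface that returns materialized results, so tests can fake it without worrying about LINQ provider differences.

// requires System.Collections.Generic
public interface IRepository<T> where T : class
{
    IEnumerable<T> GetAll();
    T GetById(int id);
    void Add(T entity);
}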

Enter Effort – EntityFramework Unit Testing Tool. Quoting the project’s description, Effort is “an ADO.NET provider that executes all the data operations on a lightweight in-process main memory database instead of a traditional external database”. Basically, Effort gives you a DbConnection that you can use to initialize the DbContext. It’s as simple as that.

Let’s consider a very simple example:

public class TodoMvcEntities : DbContext
{
    public TodoMvcEntities()
    { }

    public TodoMvcEntities(DbConnection connection): 
        base(connection, true)
    { }

    public IDbSet<Task> Tasks { get; set; }
}
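
The Task entity itself isn’t shown in this post; a minimal version matching the properties used in the tests below would be something like:

public class Task
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
}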

Now, let’s say we want to build a GET WebApi for Tasks. We’ll need a TaskService in order to be able to separate the controller tests from the service/database tests:

public class TaskService : ITaskService
{
    private readonly TodoMvcEntities _entities;

    public TaskService(TodoMvcEntities entities)
    {
        _entities = entities;
    }

    public IEnumerable<Task> List()
    {
        return _entities.Tasks.ToList();
    }
}

Then we can simply use this service in our ApiController:

public class TaskController : ApiController
{
    private readonly ITaskService _taskService;

    public TaskController(ITaskService taskService)
    {
        _taskService = taskService;
    }

    public IEnumerable<Task> Get()
    {
        return _taskService.List();
    }
}

However, this post is not about WebApi, so let’s return to our service / EntityFramework testing. Using Effort, we can easily test this method:

public class TaskServiceFacts
{
    private readonly DbConnection _dbConnection;

    public TaskServiceFacts()
    {
        _dbConnection = Effort.DbConnectionFactory.CreateTransient();
        using (var context = new TodoMvcEntities(_dbConnection))
        {
            context.Tasks.Add(new Task
                {
                    Id = 1,
                    Title = "Test task",
                    Content = "Test content"
                });
            context.SaveChanges();
        }
    }

    [Fact]
    public void List_should_get_a_list_of_all_tasks_in_database()
    {
        // Given
        var taskService = new TaskService(new TodoMvcEntities(_dbConnection));

        // When
        var result = taskService.List();

        // Then
        result.Should().HaveCount(1);
    }
}

First, you’ll notice the constructor. I put the initialization code there to simplify my tests. It’s as easy as that, and the tests run fairly fast. If you have several tests that hit Effort, you’ll notice one of them taking around 500-700ms, while the rest should be around 20-30ms. That’s probably due to first-time initialization. It’s not perfect, but it’s certainly not slow enough to stop me from running tests often.

I plan to write a series of posts about TDD with .NET web frameworks. I’ll start with WebApi and EntityFramework, then look into Nancy, ServiceStack, RavenDb, MongoDb and possibly others.

Lambdas and type inference

I’ve come across an interesting problem today. Someone on IRC posted a bit of code involving lambdas that didn’t compile. At first glance, it looked like it should compile, so I fired up Visual Studio to confirm. Here’s the original code:

using System;
using System.Linq;
using System.Collections.Generic;
 
public class Hello
{
   public static void Main()
   {
     var funs = Enumerable.Range(0, 10).Select(x => y => x + y);
     foreach (var fun in funs)
       Console.WriteLine("{0}", fun(10));
   }
}

Let’s start by going through the code really fast. What we’re trying to do is bind the rightmost lambda (y => x + y) to each of the values in the range. Basically, we should end up with 10 functions, like: y => 0 + y, y => 1 + y, etc. Then we apply them to 10, and we expect the console to write “10, 11, 12…”.

I tried building and the compiler prompted me with an error, “An implicitly typed local variable declaration cannot be initialized with `System.Collections.Generic.IEnumerator<anonymous method>.Current’”. It looked like the type inference didn’t work quite as I’d expect, so I decided to give Select a little help: Select<int, Func<int, int>>(x => y => x + y). That fixed the problem and everything runs as expected.
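
For clarity, here’s the full corrected snippet (the explicit type arguments tell the compiler the inner lambda should become a Func<int, int> delegate):

var funs = Enumerable.Range(0, 10)
    .Select<int, Func<int, int>>(x => y => x + y);

foreach (var fun in funs)
    Console.WriteLine("{0}", fun(10)); // prints 10, 11, 12, ... 19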

I found this little example interesting, so I decided to dig a bit deeper. There are two remaining questions: what do you do when you don’t know one of the types (which is the case with anonymous types), and why can’t the compiler deduce the types by itself?

We can easily change the above code to use anonymous types by changing the inner lambda: x => y => new { Result = x + y }. Now what? A quick Google search pointed me to Visual Studio’s UserVoice page, with a suggestion that seems to be exactly the issue I was having. The top comment is Paul Wescott’s, proposing an interesting workaround:

class λ 
{ 
    public static Func<TR> Func<TR>(Func<TR> f) { return f; } 
    public static Func<T1, TR> Func<T1, TR>(Func<T1, TR> f) { return f; } 
    public static Func<T1, T2, TR> Func<T1, T2, TR>(Func<T1, T2, TR> f) { return f; } 
    public static Func<T1, T2, T3, TR> Func<T1, T2, T3, TR>(Func<T1, T2, T3, TR> f) { return f; } 
    // etc... 
}

// Your example would be handled with the following usage:

var zip = λ.Func((int a, int b) => new { A = a, B = b });
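
Adapted to the chained-lambda example with an anonymous type, the usage would be roughly this (a sketch; not necessarily the exact code I ended up with):

var funs = Enumerable.Range(0, 10)
    .Select(x => λ.Func((int y) => new { Result = x + y }));

foreach (var fun in funs)
    Console.WriteLine("{0}", fun(10).Result);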

I created a helper method for my example along those lines, and it worked. First problem solved. Time to dig up the C# specification and see whether this is something they missed or whether there’s some reasoning behind it. I couldn’t find anything specific in the standard except 7.15.6, which states: “An anonymous function F must always be converted to a delegate type D or an expression tree type E, either directly or through the execution of a delegate creation expression new D(F). This conversion determines the result of the anonymous function”. My best guess is that the compiler doesn’t know whether to transform the inner lambda into a delegate or an expression tree.

Update: I’ve found a great answer by none other than Eric Lippert which explains the reasoning behind this design decision.