Blog

  • Simple test hosts in .NET using NUnitlite and file-based apps.

In my previous post I showed how NUnitlite can provide a low-ceremony, minimal footprint option to run NUnit tests. The introduction of file-based apps in .NET 10 allows this to be taken further. A file-based app no longer requires a project or solution file, meaning everything can be run from a single .cs file as if it were a script. This can drastically simplify the setup process, reducing the chances of misconfiguration and allowing more time to be spent focusing on writing the tests themselves.

    Referencing External Test Projects

    The simplest scenario is to have the file-based app serve as a thin test host from which your tests can be run. This can be achieved in just a few lines:

    #:package NUnitlite@4.5.1
    #:project ../FileBasedApps.Tests/FileBasedApps.Tests.csproj
    using NUnitLite;
    using FileBasedApps.Tests;
    // Use NUnitlite to execute the tests
    new AutoRun(typeof(SampleTests).Assembly).Execute(args);

    A file with those contents will:

    1. Load version 4.5.1 of NUnitlite from NuGet
    2. Load a project file from a relative path
    3. Pass the test assembly into NUnitlite and run all discovered tests

    It can be run from the command line with the following command:

    dotnet run .\path-to\script-file.cs

    Defining Inline Tests

    More complex scenarios are also possible, such as having your script reference the system-under-test itself and then defining the tests directly within the C# script. This may be useful if you have a project or library and you’d like to quickly write a test to validate something without the ceremony of defining a full project and solution file. For this scenario, the example from my previous post can be used and modified slightly to load the desired system under test and to then define the test methods inline.

    For example:

#:package NUnit@4.5.1
#:package NUnitlite@4.5.1
#:project ../FileBasedApps.Library/FileBasedApps.Library.csproj
using NUnitLite;
using NUnit.Framework;
using FileBasedApps.Library;

// Use NUnitlite to execute the tests
new AutoRun().Execute(args);

// The tests themselves
public class SampleTests
{
    [Test]
    public void Test1()
    {
        Assert.That(new SampleClass().MeaningOfLife, Is.EqualTo(42));
    }
}

The key difference here is that no assembly is passed into NUnitlite’s AutoRun constructor. This causes NUnitlite to fall back to the calling assembly to find tests. That is desirable here because the assembly calling NUnitlite resolves to the assembly generated for our top-level script file, where our tests reside, allowing NUnitlite to “just work” and find the tests to run.

    Passing Command-line Arguments

    It’s possible to pass options to NUnitlite to customize behaviour in either scenario by passing them on the command line after a -- delimiter. For example, the below command will change the output location of the test results file:

    dotnet run .\path-to\script-file.cs -- --result=CustomResultFile.xml

    A full list of NUnitlite command line options can be found here.
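To illustrate, a few of NUnitlite's commonly used options (option names per NUnitlite's standard runner; adjust the script path to match your file):

```shell
# Run only tests whose full name matches
dotnet run .\path-to\script-file.cs -- --test=FileBasedApps.Tests.SampleTests

# Filter tests using NUnit's test selection language
dotnet run .\path-to\script-file.cs -- --where "cat == Fast"

# Skip writing a result file entirely
dotnet run .\path-to\script-file.cs -- --noresult
```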

    Summary

    In summary, NUnitlite and the new file-based apps feature in .NET 10 work well together and can provide a very quick and easy way to start writing tests. There are several options to structure your approach and all options can be further configured at runtime by passing command-line arguments.

  • Streamline .NET Testing with NUnitLite

Modern .NET development offers a wide range of testing frameworks and, separately, test runners. Test frameworks may range from NUnit to XUnit to MSTest or others, while runners may be first-party to each of those or third-party and framework-agnostic. These runners raise the level of abstraction and can offer helpful features; however, sometimes you may just want to run tests with as little ceremony as possible. This is where I’ve found NUnitlite to shine.

NUnitlite works with NUnit to allow your test assembly to become an executable, removing the need for an external runner altogether. This can be particularly helpful if there are limitations or disconnects when installing or updating runners in CI environments.

    For example:

Suppose a CI test suite is initiated from a dedicated server but the tests must access private or secure resources like a database. You could allow access to the database from the CI server, but a security-conscious person might consider that to violate the Principle of Least Privilege. That could be fixed by configuring the CI server to run the tests from a dedicated machine, but that carries the undesirable consequence of installing dedicated test runners on that machine. Allowing tests to run themselves removes both the need to open access to a trusted resource and the need to install an additional piece of software.

    This can be done in just a few steps:

    • Install NUnitlite to your test project
    <PackageReference Include="NUnitLite" Version="4.5.1" />
    • OPTIONAL: Update your test project to output as an “Exe”
    <OutputType>Exe</OutputType>
    • Add a Program.cs file (if not using single-file applications) and include:
    using NUnitLite;
    return new AutoRun().Execute(args);

    These three steps will turn the test assembly into an executable which can be easily and independently run. More information and examples can be found in the documentation.
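Once configured, the assembly runs like any other console app; for example (hypothetical project name, with an optional NUnitlite result-file switch):

```shell
# Build and execute the self-hosting test project
dotnet run --project ./FileBasedApps.Tests -- --result=TestResults.xml
```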

  • Including Parameter Names in Data-driven NUnit Tests

    NUnit allows for data-driven parameterized tests and does so by incorporating the data points into the test name as method arguments. For example, something like this:

[TestCaseSource(nameof(GenerateData))]
public void MyTest(int cost)
{
    Assert.Pass();
}

private static int[] GenerateData()
{
    return [14, 140, 1400];
}

This will translate into three distinct tests named (by default):

    • “MyTest(14)”
    • “MyTest(140)”
    • “MyTest(1400)”

This is concise, but it’s not clear what the data points represent, especially if the test runs or reports are reviewed by someone unfamiliar with the test implementation. It would be clearer if the parameter name could be included so that it’s obvious what the number represents. For example:

    • “MyTest(cost: 14)”
    • “MyTest(cost: 140)”
    • “MyTest(cost: 1400)”

The argument display in the test name can be customized by updating the source of the data to return TestCaseData instances instead of the raw underlying data. The TestCaseData class supports a few different ways to include the parameter name in the test name. Each of the below is a built-in way to do that.

[TestCaseSource(nameof(GenerateData))]
public void MyTest(int cost)
{
    Assert.Pass();
}

private static TestCaseData[] GenerateData()
{
    return [
        new TestCaseData(14).SetArgDisplayNames("cost: 14"),
        new TestCaseData(140).SetName("{m}{a}"),
        new TestCaseData(1400).SetName("{m}{p}")
    ];
}

One downside of this is that these must be specified separately for each test case, and in some cases even duplicated within a single test case. This leaves the possibility of inconsistencies creeping in over time. While NUnit doesn’t support a centralized way to apply this formatting today, a custom attribute can be developed to help. For example, the attribute below can be applied as desired to individual method parameters to format them. Note that it requires both a parameter-level attribute and a custom test case source attribute.

    Usage:

[CustomTestCaseSource(nameof(GenerateData))]
public void MyTest([IncludeParamName] int cost)
{
    Assert.Pass();
}

private static TestCaseData[] GenerateData()
{
    return [
        new TestCaseData(14),
        new TestCaseData(140).SetArgDisplayNames("19"),
        // Setting the name here will cause the parameter-level pretty printing to be skipped
        new TestCaseData(1400).SetName("{m}{p}")
    ];
}

    Implementation:

[AttributeUsage(AttributeTargets.Parameter, AllowMultiple = false)]
public class IncludeParamNameAttribute : Attribute { }

public class CustomTestCaseSource : TestCaseSourceAttribute, ITestBuilder
{
    private readonly NUnitTestCaseBuilder _builder = new();
    private const BindingFlags PrivateInternalBinding = BindingFlags.NonPublic | BindingFlags.Instance;

    private static readonly MethodInfo? GetTestCasesForMethod = typeof(TestCaseSourceAttribute)
        .GetMethod("GetTestCasesFor",
            PrivateInternalBinding,
            [typeof(IMethodInfo)]
        );

    public CustomTestCaseSource(string source) : base(source) { }

    IEnumerable<TestMethod> ITestBuilder.BuildFrom(IMethodInfo method, Test? suite)
    {
        var GetTestCasesFor = GetTestCasesForMethod!.CreateDelegate<Func<IMethodInfo, IEnumerable<ITestCaseData>>>(this);
        var argDisplayNamesProperty = typeof(NUnit.Framework.Internal.TestParameters)
            .GetProperty("ArgDisplayNames", PrivateInternalBinding);
        var methodParams = method.MethodInfo.GetParameters();
        var paramNameDisplayMap = methodParams.Select(o => o.GetCustomAttribute<IncludeParamNameAttribute>() is not null).ToArray();
        var doShowParameterNames = paramNameDisplayMap.Any(o => o);
        int count = 0;

        foreach (ITestCaseData parms in GetTestCasesFor(method))
        {
            // Check if TestName is set or not, as ArgDisplayNames cannot be used if TestName is set
            if (doShowParameterNames && parms.TestName is null && parms is TestCaseData tcd)
            {
                var displayNames = (string[]?)argDisplayNamesProperty?.GetValue(tcd) ?? new string[methodParams.Length];
                var hasChangedNames = false;
                for (var i = 0; i < displayNames.Length; i++)
                {
                    var displayName = displayNames[i] ?? parms.Arguments[i]?.ToString() ?? "null";
                    if (paramNameDisplayMap[i])
                    {
                        displayNames[i] = $"{methodParams[i].Name}: {displayName}";
                        hasChangedNames = true;
                    }
                    else
                    {
                        displayNames[i] = displayName;
                    }
                }
                if (hasChangedNames)
                {
                    argDisplayNamesProperty?.SetValue(tcd, displayNames);
                }
            }
            count++;
            yield return _builder.BuildTestMethod(method, suite, (TestCaseParameters)parms);
        }

        // BELOW WAS COPIED FROM THE BUILT-IN TESTCASESOURCE IMPLEMENTATION
        // -----------------------------
        // If count > 0, error messages will be shown for each case
        // but if it's 0, we need to add an extra "test" to show the message.
        if (count == 0 && method.GetParameters().Length == 0)
        {
            var parms = new TestCaseParameters();
            parms.RunState = RunState.NotRunnable;
            parms.Properties.Set(PropertyNames.SkipReason, "TestCaseSourceAttribute may not be used on a method without parameters");
            yield return _builder.BuildTestMethod(method, suite, parms);
        }
    }
}

Disclaimer: This was written as a PoC for a one-off purpose and may still have a few edge cases not accounted for, but it should be stable enough for the majority of cases. Enjoy!

  • Efficiently URL-Encoding a Hashed Value in .NET

I sometimes find I need to pass data around in a URL. Doing this succinctly and/or securely may mean hashing or encrypting the data so that a much shorter value can be passed around. Hashing and encryption in turn each operate on binary data, which means I first have to convert my data into bytes. Binary data can’t be included in a URL, resulting in yet another conversion, typically encoding the bytes back into a string-based format such as Base64.

All of these operations have overhead. .NET provides each of the building blocks for these operations, but there’s no one-shot method to do it all. Further, while each of the operations in the .NET base class library (BCL) is highly optimized, there is inevitably some performance lost if I use code like this:

static string HashToBase64String(Encoding encoding, string input)
{
    byte[] bytes = encoding.GetBytes(input);
    byte[] hash = SHA256.HashData(bytes);
    return Base64Url.EncodeToString(hash);
}

The biggest performance culprit is the call to GetBytes(). It must allocate an entire new array just so that the data can immediately be transformed again and the original bytes discarded. This can add up, especially if the input is a rather large string. It would be nice if we could pool those arrays, or better yet avoid them altogether for smaller strings.

As a result, I have a helper snippet I often resort to and modify as cases come up. It outperforms the above implementation in both speed and memory consumption when benchmarked across strings of varying sizes.

static string HashToBase64String(Encoding encoding, string input)
{
    const int StackAllocThreshold = 256;

    int maxByteCount = encoding.GetMaxByteCount(input.Length);
    byte[]? pooledBuffer = null;
    Span<byte> buffer = maxByteCount <= StackAllocThreshold
        ? stackalloc byte[StackAllocThreshold]
        : (pooledBuffer = ArrayPool<byte>.Shared.Rent(maxByteCount));
    try
    {
        int byteCount = encoding.GetBytes(input, buffer);
        Span<byte> hash = stackalloc byte[SHA256.HashSizeInBytes];
        SHA256.HashData(buffer[..byteCount], hash);
        return Base64Url.EncodeToString(hash);
    }
    finally
    {
        if (pooledBuffer is not null)
        {
            ArrayPool<byte>.Shared.Return(pooledBuffer);
        }
    }
}

    This takes advantage of a few things:

    • Prefer Span<byte> over byte[] where possible
    • Use stackalloc where possible
    • Pool buffers where we can’t use stackalloc

    It could be taken further through other features such as SkipLocalsInit or fixed, both of which would require the /unsafe flag to compile. The benchmarks without this already give a nice boost, in particular by reducing memory overhead on larger payloads.
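For reference, enabling SkipLocalsInit looks roughly like the sketch below (the attribute lives in System.Runtime.CompilerServices, and the project must opt into unsafe compilation for the compiler to accept it):

```csharp
// In the project file: <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
// SkipLocalsInit suppresses the zero-initialization the runtime otherwise
// performs on stackalloc'd locals before first use.
using System.Runtime.CompilerServices;

[module: SkipLocalsInit]
```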

    | Method | Length | Utf8 | Mean | Ratio | Gen0 | Allocated | Alloc Ratio |
    |---------- |------- |------ |---------:|------:|-------:|----------:|------------:|
    | Simple | 100 | False | 197.2 ns | 1.00 | 0.0470 | 296 B | 1.00 |
    | Optimized | 100 | False | 164.9 ns | 0.84 | 0.0176 | 112 B | 0.38 |
    | | | | | | | | |
    | Simple | 100 | True | 191.8 ns | 1.00 | 0.0470 | 296 B | 1.00 |
    | Optimized | 100 | True | 177.5 ns | 0.93 | 0.0176 | 112 B | 0.38 |
    | | | | | | | | |
    | Simple | 1000 | False | 615.2 ns | 1.00 | 0.1898 | 1192 B | 1.00 |
    | Optimized | 1000 | False | 582.6 ns | 0.95 | 0.0172 | 112 B | 0.09 |
    | | | | | | | | |
    | Simple | 1000 | True | 635.0 ns | 1.00 | 0.1898 | 1192 B | 1.00 |
    | Optimized | 1000 | True | 584.4 ns | 0.92 | 0.0172 | 112 B | 0.09 |

  • Writing Custom Constraints in NUnit

    NUnit supports several ways to assert against data. The recommended one, constraint-based, allows for a more naturally-read syntax which supports the flexible chaining of conditions to prove simple or complex facets about the system.

    It follows the “Assert that” format:

    Assert.That(actualValue, Is.EqualTo(expectedValue));
    

    With the following components (listed left to right):

    • actualValue : The actual output of the system under test which you wish to validate
    • Is : A starting clause. The most common is Is but NUnit also defines Has, Does, and others to allow for readable tests
    • EqualTo() : An example function which returns the constraint that validates your data. EqualTo returns an instance of the EqualConstraint class, which internally contains the validation logic. Other examples include LessThan() or Even.
    • expectedValue : The data to compare actualValue to using the rules defined by the constraint.

NUnit also contains built-in operators. For example, checking inequality is a matter of prepending the Not operator in front of EqualTo():

    Assert.That(actualValue, Is.Not.EqualTo(expectedValue));
    

Similarly, if a situation requires checking a characteristic of a value instead of comparing it to another, then a unary constraint like Even can be used:

    Assert.That(actualValue, Is.Even);
    

    The built-in constraints will likely meet 99.9% of use cases, but there may be some domain-specific rules which aren’t covered out-of-the-box. For example, a math-oriented program may wish to validate that a number is prime. It would be very nice if a test could be written to read:

    Assert.That(actualValue, Is.Prime);
    

    NUnit supports this through custom constraints. Custom constraints are classes which extend NUnit’s own Constraint class.

    A PrimeConstraint may look like:

    public class PrimeConstraint : Constraint
    {
        public override string Description => "A prime value";
        public override ConstraintResult ApplyTo<TActual>(TActual actualValue)
        {
            var actualInt = Convert.ToInt32(actualValue);
            ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(actualInt, 0, nameof(actualInt));
            if (actualInt == 1)
            {
                // 1 is not prime by definition
                return new ConstraintResult(this, actualValue, false);
            }
            for (int i = 2; i <= (int)Math.Sqrt(actualInt); i++)
            {
                if (actualInt % i == 0)
                {
                    // Not prime if we've found an evenly divisible factor
                    return new ConstraintResult(this, actualValue, false);
                }
            }
            return new ConstraintResult(this, actualValue, true);
        }
    }
    

In addition to extending the Constraint class, the class must override the ApplyTo<TActual>() method, which validates the actualValue originally passed into Assert.That(). Hooking this into the NUnit syntax tree is then very easy thanks to the new C# 14 extension members feature. Adding a new static property onto NUnit’s Is class and a new property onto the ConstraintExpression class can be achieved in one line each.

    public static class ConstraintExtensions
    {
        extension(NUnit.Framework.Is)
        {
            public static Constraint Prime => new PrimeConstraint();
        }
        extension(ConstraintExpression expression)
        {
            public Constraint Prime => expression.Append(new PrimeConstraint());
        }
    }
    

    And that’s it. They can be used in tests seamlessly afterwards as if they were part of NUnit itself.

    [Test]
    public void Test1()
    {
        Assert.That(5, Is.Prime);
        Assert.That(4, Is.Not.Prime);
    }
    

  • Using Cooperative Cancellation in long-running tests

    NUnit 4 has added support for cooperatively cancelling long-running tests in several scenarios. Cooperative cancellation is the preferred way to end any long-running operation in .NET as it allows for the graceful ending of operations, however it requires explicit coordination by calling code to do this. This coordination is handled, in part, by the passing of a CancellationToken into the potentially long-running method. The cancellation token can be signaled to a “cancelled” state to allow the long-running operation to react and gracefully end itself.

For example, consider the below code, which must fetch an external resource over HTTP and where a cancellationToken is passed in as the last parameter. The GetAsync() method will exit early when the operation is cancelled.

            var httpClient = new HttpClient();
            await httpClient.GetAsync("https://server", cancellationToken);
    

    But where does this cancellationToken come from? It’s possible to construct the token and manage it from a CancellationTokenSource yourself, however frameworks will often have a mechanism to do this for you.
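For reference, a minimal sketch of managing it yourself with a CancellationTokenSource (standard BCL types; the timeout and delay values here are arbitrary):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// The source owns the token and signals cancellation after the given delay
using var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(100));

try
{
    // The long-running operation observes the token and ends early
    await Task.Delay(TimeSpan.FromSeconds(5), cts.Token);
}
catch (OperationCanceledException)
{
    Console.WriteLine("Operation cancelled cooperatively.");
}
```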

    NUnit supports cooperative cancellation in a few ways, the simplest of which is through the CancelAfter attribute. This attribute will indicate to NUnit that it should manage a cancellation token on behalf of the test. The cancellation token itself can be used by the test in one of two ways:

    Read from the test context:

            [Test]
            [CancelAfter(CooperativeTimeoutMilliseconds)]
            public async Task WithCooperativeCancellation_Context()
            {
                var delay = TimeSpan.FromMilliseconds(600);
                var timer = Stopwatch.StartNew();
                await Task.Delay(delay, TestContext.CurrentContext.CancellationToken);
                timer.Stop();
    
                var expectedDelay = Math.Min(delay.Milliseconds, CooperativeTimeoutMilliseconds);
    
                Assert.That(timer.ElapsedMilliseconds, Is.EqualTo(expectedDelay).Within(50));
            }
    

    Passed by NUnit as an argument into the method:

            [Test]
            [CancelAfter(CooperativeTimeoutMilliseconds)]
            public async Task WithCooperativeCancellation_Argument(CancellationToken cancellationToken)
            {
                var delay = TimeSpan.FromMilliseconds(600);
                var timer = Stopwatch.StartNew();
                await Task.Delay(delay, cancellationToken);
                timer.Stop();
    
                var expectedDelay = Math.Min(delay.Milliseconds, CooperativeTimeoutMilliseconds);
    
                Assert.That(timer.ElapsedMilliseconds, Is.EqualTo(expectedDelay).Within(50));
            }
    

    Both conventions are supported by NUnit and will allow a long-running test to respond to and gracefully cancel any long-running tasks in flight.

  • Retrying Tests on Exception in NUnit

    Tests, especially unit tests, should be reliable and reproducible. System or integration tests can, however, exercise many different parts of a codebase or even a network. This increase in the number of moving pieces can potentially lead to decreased test reliability. NUnit’s Retry attribute was added to support cases where a developer may believe a test could have a reasonable chance at passing if retried after an initial failure.

    For example, the below test will fail about half the time it is run, but it will be run up to 3 times before being finally marked as failed in the run.

    [Test, Retry(3)]
    public static void TestRandomlyEven()
    {
        Assert.That(Random.Shared.Next(), Is.Even);
    }
    

So far, however, these retries have only applied when a test fails an assertion. Test failures due to unhandled exceptions are not retried. So the below test will also fail about half the time it is run, but those failures will not be retried and the test will immediately be treated as failed.

[Test, Retry(3)]
public static void TestRandomlyEven()
{
    if (Random.Shared.Next() % 2 != 0)
        throw new InvalidOperationException("Odd number.");
    Assert.Pass();
}
    

This may be desirable depending on how a test is written and whether you want to be alerted to any instability or potential flakiness. There may be other cases, such as detailed system integration tests, where one may want network or database exceptions to be retried. Writing a custom attribute to support this is quite easy, as most of the pieces are already exposed as public types within NUnit. The below was written against NUnit 4.4 but should work on earlier versions too.

[Test, RetryOnException(3)]
public static void TestRandomlyEven()
{
    if (Random.Shared.Next() % 2 != 0)
        throw new InvalidOperationException("Odd number.");
    Assert.Pass();
}
    
    public class RetryOnExceptionAttribute(int tryCount) : NUnitAttribute, IRepeatTest
    {
        public TestCommand Wrap(TestCommand command)
        {
            return new RetryAttribute.RetryCommand(new FailOnExceptionCommand(command), tryCount);
        }
    
        private class FailOnExceptionCommand(TestCommand innerCommand) : DelegatingTestCommand(innerCommand)
        {
            public override TestResult Execute(TestExecutionContext context)
            {
                try
                {
                    return innerCommand.Execute(context);
                }
                catch (Exception ex)
                {
                    context.CurrentResult.SetResult(ResultState.Failure, ex.Message, ex.StackTrace);
                    context.CurrentResult.RecordTestCompletion();
                    return context.CurrentResult;
                }
            }
        }
    }
    

    Fortunately the ability to retry on specific exceptions will be available as part of NUnit 4.5. A new RetryExceptions property will be added to the attribute that can be given an array of Types of exceptions to retry.

[Test, Retry(3, RetryExceptions = [typeof(InvalidOperationException)])]
public static void TestRandomlyEven()
{
    if (Random.Shared.Next() % 2 != 0)
        throw new InvalidOperationException("Odd number.");
    Assert.Pass();
}
    

    The entries are treated as base classes so retrying all exceptions will become as simple as:

[Test, Retry(3, RetryExceptions = [typeof(Exception)])]
public static void TestRandomlyEven()
{
    if (Random.Shared.Next() % 2 != 0)
        throw new InvalidOperationException("Odd number.");
    Assert.Pass();
}
    

    A big thank you to manfred-brands for having added this feature recently.

  • Async Enumerable Test Sources in NUnit

    In my previous post I showed how to use awaitable TestCase or Value sources in NUnit 3.14. NUnit 4 continues the story of adding async support by also allowing TestCaseSource, ValueSource, or TestFixtureSource to return an IAsyncEnumerable.

    public class AsyncTestSourcesTests
    {
        [TestCaseSource(nameof(MyMethodAsync))]
        public void Test1Async(MyClass item)
        {
        }
    
        public static async IAsyncEnumerable<MyClass> MyMethodAsync()
        {
            using var file = File.OpenRead("Path/To/data.json");
            await foreach (var item in JsonSerializer.DeserializeAsyncEnumerable<MyClass>(file))
            {
                yield return item;
            }
        }
    
        public class MyClass
        {
            public int Foo { get; set; }
            public int Bar { get; set; }
        }
    }
    

    As with IEnumerable-backed sources, NUnit will lazily enumerate the collection to avoid bringing all the objects into memory at once, instead only generating the test or value sources when needed.

Many async enumerable operations also require asynchronous disposal of the underlying resource after enumerating the collection. NUnit will take care of calling the dispose method to ensure everything is cleaned up properly. In the event that both DisposeAsync and Dispose methods are present, NUnit will only call the asynchronous DisposeAsync method, not the synchronous Dispose method.
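The same preference exists in the C# language itself: when a type implements both interfaces, an await using statement calls only DisposeAsync. A small self-contained sketch (hypothetical Resource type):

```csharp
using System;
using System.Threading.Tasks;

// Only "DisposeAsync called" is printed; the synchronous Dispose is skipped
await using (var resource = new Resource())
{
}

class Resource : IDisposable, IAsyncDisposable
{
    public void Dispose() => Console.WriteLine("Dispose called");

    public ValueTask DisposeAsync()
    {
        Console.WriteLine("DisposeAsync called");
        return ValueTask.CompletedTask;
    }
}
```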

  • Async Test Sources in NUnit

NUnit has long supported the definition of test cases in numerous forms, including inline primitive data via the TestCaseAttribute, or potentially more complex data returned from a method, property, or other source at runtime via the TestCaseSourceAttribute. The latter has typically only supported synchronous methods. This complicated defining data-driven test cases where an internal operation required calling a Task-based API. A common example is a test case source which reads from a JSON file or other stream using a method like JsonSerializer.DeserializeAsync().

    In the past this would mean an awkward and unnatural call using something like .GetAwaiter().GetResult():

    public class Tests
    {
        [TestCaseSource(nameof(MyMethod))]
        public void Test1(MyClass item)
        {
        }
    
        public static IEnumerable<MyClass> MyMethod()
        {
            using var file = File.OpenRead("Path/To/data.json");
            var t = JsonSerializer.DeserializeAsync<IEnumerable<MyClass>>(file).AsTask();
    
            return t.GetAwaiter().GetResult();
        }
    }
    

    NUnit 3.14 was released a few months ago and included support for “async” or task-based test case sources. Now a TestCaseSource can target a Task-returning method to allow for much more natural code:

    public class Tests
    {
        [TestCaseSource(nameof(MyMethodAsync))]
        public void Test1Async(MyClass item)
        {
        }
    
        public static async Task<IEnumerable<MyClass>> MyMethodAsync()
        {
            using var file = File.OpenRead("Path/To/data.json");
            return await JsonSerializer.DeserializeAsync<IEnumerable<MyClass>>(file);
        }
    }
    

    The above example focuses on Task, but any awaitable type such as ValueTask or a custom awaitable also works. Other “source” attributes such as TestFixtureSource or ValueSource are supported as well.
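For instance, the earlier Task-based source could equally be declared with ValueTask; a sketch (hypothetical class and data, the attribute wiring on the test method is unchanged):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public static class AsyncSources
{
    // A ValueTask-returning source; NUnit awaits it just as it would a Task
    public static async ValueTask<IEnumerable<int>> MyMethodAsync()
    {
        await Task.Yield();
        return new[] { 1, 2, 3 };
    }
}
```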

  • Creating a Grunt plugin on Windows in MINGW

I’ve been working with grunt a bit lately and have found the need for a task that doesn’t seem to exist in the expansive list of existing plugins. So I thought I’d create my own. Fortunately, grunt has a great and simple page on doing this (http://gruntjs.com/creating-plugins).

    Unfortunately, I ran into some issues.

    I usually like to work in the Git Bash shell that comes with MINGW. Trouble is, this was causing some pathing issues. Specifically, with these two commands:

1. Install the gruntplugin template with git clone git://github.com/gruntjs/grunt-init-gruntplugin.git ~/.grunt-init/gruntplugin (%USERPROFILE%\.grunt-init\gruntplugin on Windows).
    2. Run grunt-init gruntplugin in an empty directory.

Apparently MINGW, or at least my version, has some issues resolving %USERPROFILE%. So I ended up with a cloned git repo in my local directory called %USERPROFILE%.grunt-initgruntplugin. After fixing that and moving it to my root user profile I kept getting an “EINVAL” issue on the next command. I figured this had to be a pathing issue too, so I dropped out of MINGW into a cmd shell by typing cmd. Except that didn’t quite do it. Maybe MINGW was intercepting input and filtering it in an unexpected way, but my problems became even worse. So my fix:

    1. Add git to your OS Path variable (C:\Program Files\Git\bin)
    2. Run a regular command shell (cmd outside of MINGW)

    With those two small changes, everything worked flawlessly.