Blog

  • Including Parameter Names in Data-driven NUnit Tests

    NUnit allows for data-driven parameterized tests and, by default, incorporates the data points into the test name as method arguments. For example, something like this:

    [TestCaseSource(nameof(GenerateData))]
    public void MyTest(int cost)
    {
        Assert.Pass();
    }

    private static int[] GenerateData()
    {
        return [14, 140, 1400];
    }

    This will translate into three distinct tests named (by default):

    • “MyTest(14)”
    • “MyTest(140)”
    • “MyTest(1400)”

    This is concise, but it’s not clear what the data points represent, especially if the test runs or reports are being reviewed by someone unfamiliar with the test implementation. Including the parameter name would make it clear what each number represents. Ex:

    • “MyTest(cost: 14)”
    • “MyTest(cost: 140)”
    • “MyTest(cost: 1400)”

    The argument display in the test name can be customized by updating the data source to return TestCaseData instances instead of the raw underlying data. The TestCaseData class supports a few different ways to include the parameter name in the test name. Each of the below is a built-in way to do that.

    [TestCaseSource(nameof(GenerateData))]
    public void MyTest(int cost)
    {
        Assert.Pass();
    }

    private static TestCaseData[] GenerateData()
    {
        return [
            new TestCaseData(14).SetArgDisplayNames("cost: 14"),
            new TestCaseData(140).SetName("{m}{a}"),
            new TestCaseData(1400).SetName("{m}{p}")
        ];
    }

    One downside of this approach is that the names must be specified separately for each test case, and in some cases even duplicated within a single test case. This leaves the possibility of inconsistencies creeping in over time. While NUnit doesn’t support a centralized way to apply this formatting today, a custom attribute can be developed to help. For example, the attribute below can be applied as desired to individual method parameters to format them. Note that it requires both a parameter-level attribute and a custom test case source attribute.

    Usage:

    [CustomTestCaseSource(nameof(GenerateData))]
    public void MyTest([IncludeParamName] int cost)
    {
        Assert.Pass();
    }

    private static TestCaseData[] GenerateData()
    {
        return [
            new TestCaseData(14),
            new TestCaseData(140).SetArgDisplayNames("19"),
            // Setting the name here will cause the parameter-level pretty printing to get skipped
            new TestCaseData(1400).SetName("{m}{p}")
        ];
    }

    Implementation:

    [AttributeUsage(AttributeTargets.Parameter, AllowMultiple = false)]
    public class IncludeParamNameAttribute : Attribute { }

    public class CustomTestCaseSource : TestCaseSourceAttribute, ITestBuilder
    {
        private readonly NUnitTestCaseBuilder _builder = new();
        private const BindingFlags PrivateInternalBinding = BindingFlags.NonPublic | BindingFlags.Instance;
        private static readonly MethodInfo? GetTestCasesForMethod = typeof(TestCaseSourceAttribute)
            .GetMethod("GetTestCasesFor",
                PrivateInternalBinding,
                [typeof(IMethodInfo)]
            );

        public CustomTestCaseSource(string source) : base(source) { }

        IEnumerable<TestMethod> ITestBuilder.BuildFrom(IMethodInfo method, Test? suite)
        {
            var GetTestCasesFor = GetTestCasesForMethod!.CreateDelegate<Func<IMethodInfo, IEnumerable<ITestCaseData>>>(this);
            var argDisplayNamesProperty = typeof(NUnit.Framework.Internal.TestParameters)
                .GetProperty("ArgDisplayNames", PrivateInternalBinding);
            var methodParams = method.MethodInfo.GetParameters();
            var paramNameDisplayMap = methodParams.Select(o => o.GetCustomAttribute<IncludeParamNameAttribute>() is not null).ToArray();
            var doShowParameterNames = paramNameDisplayMap.Any(o => o);
            int count = 0;
            foreach (ITestCaseData parms in GetTestCasesFor(method))
            {
                // Check if TestName is set or not as ArgDisplayNames cannot be used if TestName is set
                if (doShowParameterNames && parms.TestName is null && parms is TestCaseData tcd)
                {
                    var displayNames = (string[]?)argDisplayNamesProperty?.GetValue(tcd) ?? new string[methodParams.Length];
                    var hasChangedNames = false;
                    for (var i = 0; i < displayNames.Length; i++)
                    {
                        var displayName = displayNames[i] ?? parms.Arguments[i]?.ToString() ?? "null";
                        if (paramNameDisplayMap[i])
                        {
                            displayNames[i] = $"{methodParams[i].Name}: {displayName}";
                            hasChangedNames = true;
                        }
                        else
                        {
                            displayNames[i] = displayName;
                        }
                    }
                    if (hasChangedNames)
                    {
                        argDisplayNamesProperty?.SetValue(tcd, displayNames);
                    }
                }
                count++;
                yield return _builder.BuildTestMethod(method, suite, (TestCaseParameters)parms);
            }

            // BELOW WAS COPIED FROM THE BUILT-IN TESTCASESOURCE IMPLEMENTATION
            // -----------------------------
            // If count > 0, error messages will be shown for each case
            // but if it's 0, we need to add an extra "test" to show the message.
            if (count == 0 && method.GetParameters().Length == 0)
            {
                var parms = new TestCaseParameters();
                parms.RunState = RunState.NotRunnable;
                parms.Properties.Set(PropertyNames.SkipReason, "TestCaseSourceAttribute may not be used on a method without parameters");
                yield return _builder.BuildTestMethod(method, suite, parms);
            }
        }
    }

    Disclaimer: This was written as a PoC for a one-off purpose and may still have a few edge cases not accounted for but it should be stable enough for the majority of cases. Enjoy!

  • Efficiently URL-Encoding a Hashed Value in .NET

    I sometimes find I need to pass data around in a URL. Doing this succinctly and/or securely may mean I need to hash or encrypt the data so that a much shorter value can be passed around. Hashing and encryption in turn each operate on binary data, which means I first have to convert my data into bytes. Binary data can’t be included in a URL, resulting in yet another conversion, typically encoding the bytes back into a string-based format such as Base64.

    All of these operations have overhead. .NET provides each of the building blocks for these operations, but there’s no one-shot method to do it all. Further, while each of the operations in the .NET base class library (BCL) is highly optimized, there is inevitably some performance lost if I were to use code like this:

    static string HashToBase64String(Encoding encoding, string input)
    {
        byte[] bytes = encoding.GetBytes(input);
        byte[] hash = SHA256.HashData(bytes);
        return Base64Url.EncodeToString(hash);
    }

    The biggest performance culprit is the call to GetBytes(). It must allocate an entirely new array just so the data can be immediately transformed again and the original bytes discarded. This can add up, especially if the data is a rather large string. It would be nice if we could pool those arrays, or better yet avoid them altogether for smaller strings.

    As a result, I have a helper snippet I often resort to and modify as cases come up. It outperforms the above implementation in both speed and memory consumption when benchmarked against strings of varying sizes.

    static string HashToBase64String(Encoding encoding, string input)
    {
        const int StackAllocThreshold = 256;
        int maxByteCount = encoding.GetMaxByteCount(input.Length);
        byte[]? pooledBuffer = null;
        Span<byte> buffer = maxByteCount <= StackAllocThreshold
            ? stackalloc byte[StackAllocThreshold]
            : (pooledBuffer = ArrayPool<byte>.Shared.Rent(maxByteCount));
        try
        {
            int byteCount = encoding.GetBytes(input, buffer);
            Span<byte> hash = stackalloc byte[SHA256.HashSizeInBytes];
            SHA256.HashData(buffer[..byteCount], hash);
            return Base64Url.EncodeToString(hash);
        }
        finally
        {
            if (pooledBuffer is not null)
            {
                ArrayPool<byte>.Shared.Return(pooledBuffer);
            }
        }
    }

    This takes advantage of a few things:

    • Prefer Span<byte> over byte[] where possible
    • Use stackalloc where possible
    • Pool buffers where we can’t use stackalloc

    It could be taken further through other features such as SkipLocalsInit or fixed, both of which would require the /unsafe flag to compile. The benchmarks without this already give a nice boost, in particular by reducing memory overhead on larger payloads.

    | Method | Length | Utf8 | Mean | Ratio | Gen0 | Allocated | Alloc Ratio |
    |---------- |------- |------ |---------:|------:|-------:|----------:|------------:|
    | Simple | 100 | False | 197.2 ns | 1.00 | 0.0470 | 296 B | 1.00 |
    | Optimized | 100 | False | 164.9 ns | 0.84 | 0.0176 | 112 B | 0.38 |
    | | | | | | | | |
    | Simple | 100 | True | 191.8 ns | 1.00 | 0.0470 | 296 B | 1.00 |
    | Optimized | 100 | True | 177.5 ns | 0.93 | 0.0176 | 112 B | 0.38 |
    | | | | | | | | |
    | Simple | 1000 | False | 615.2 ns | 1.00 | 0.1898 | 1192 B | 1.00 |
    | Optimized | 1000 | False | 582.6 ns | 0.95 | 0.0172 | 112 B | 0.09 |
    | | | | | | | | |
    | Simple | 1000 | True | 635.0 ns | 1.00 | 0.1898 | 1192 B | 1.00 |
    | Optimized | 1000 | True | 584.4 ns | 0.92 | 0.0172 | 112 B | 0.09 |
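    For illustration, a SkipLocalsInit variant might look like the sketch below. This is an assumption about the shape it would take rather than a benchmarked implementation; it requires enabling AllowUnsafeBlocks in the project file, and for brevity it only handles inputs whose encoded form fits the fixed stack buffer.

    ```csharp
    [System.Runtime.CompilerServices.SkipLocalsInit]
    static string HashToBase64StringNoInit(Encoding encoding, string input)
    {
        // SkipLocalsInit suppresses the zero-initialization normally emitted for
        // stack-allocated locals. That is safe here because every byte we read
        // is first written by GetBytes/HashData before use.
        Span<byte> buffer = stackalloc byte[256];
        int byteCount = encoding.GetBytes(input, buffer);
        Span<byte> hash = stackalloc byte[SHA256.HashSizeInBytes];
        SHA256.HashData(buffer[..byteCount], hash);
        return Base64Url.EncodeToString(hash);
    }
    ```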

  • Writing Custom Constraints in NUnit

    NUnit supports several ways to assert against data. The recommended one, constraint-based, allows for a more naturally-read syntax which supports the flexible chaining of conditions to prove simple or complex facets about the system.

    It follows the “Assert that” format:

    Assert.That(actualValue, Is.EqualTo(expectedValue));
    

    With the following components (listed left to right):

    • actualValue : The actual output of the system under test which you wish to validate
    • Is : A starting clause. The most common is Is but NUnit also defines Has, Does, and others to allow for readable tests
    • EqualTo() : An example function which returns the constraint that validates your data. EqualTo returns an instance of the EqualConstraint class, which internally contains the validation logic. Other examples include LessThan() or Even.
    • expectedValue : The data to compare actualValue to using the rules defined by the constraint.

    NUnit also contains built-in operators. For example, checking inequality is a matter of prepending the Not operator in front of EqualTo():

    Assert.That(actualValue, Is.Not.EqualTo(expectedValue));
    

    Similarly, if a situation requires checking a characteristic of a value instead of comparing it, then a unary constraint like Even can be used:

    Assert.That(actualValue, Is.Even);
    

    The built-in constraints will likely meet 99.9% of use cases, but there may be some domain-specific rules which aren’t covered out-of-the-box. For example, a math-oriented program may wish to validate that a number is prime. It would be very nice if a test could be written to read:

    Assert.That(actualValue, Is.Prime);
    

    NUnit supports this through custom constraints. Custom constraints are classes which extend NUnit’s own Constraint class.

    A PrimeConstraint may look like:

    public class PrimeConstraint : Constraint
    {
        public override string Description => "A prime value";
        public override ConstraintResult ApplyTo<TActual>(TActual actualValue)
        {
            var actualInt = Convert.ToInt32(actualValue);
            ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(actualInt, 0, nameof(actualInt));
            if (actualInt == 1)
            {
                // 1 is not prime by definition
                return new ConstraintResult(this, actualValue, false);
            }
            for (int i = 2; i <= (int)Math.Sqrt(actualInt); i++)
            {
                if (actualInt % i == 0)
                {
                    // Not prime if we've found an evenly divisible factor
                    return new ConstraintResult(this, actualValue, false);
                }
            }
            return new ConstraintResult(this, actualValue, true);
        }
    }
    

    In addition to extending the Constraint class, the class must override the ApplyTo<TActual>() method, which validates the actualValue originally passed into Assert.That(). Hooking this into the NUnit syntax tree is then very easy thanks to the new C# 14 extension members feature: adding a new function onto NUnit’s static Is class and a new property onto the ConstraintExpression class can each be achieved in one line.

    public static class ConstraintExtensions
    {
        extension(NUnit.Framework.Is)
        {
            public static Constraint Prime => new PrimeConstraint();
        }
        extension(ConstraintExpression expression)
        {
            public Constraint Prime => expression.Append(new PrimeConstraint());
        }
    }
    

    And that’s it. They can be used in tests seamlessly afterwards as if they were part of NUnit itself.

    [Test]
    public void Test1()
    {
        Assert.That(5, Is.Prime);
        Assert.That(4, Is.Not.Prime);
    }
    

  • Using Cooperative Cancellation in long-running tests

    NUnit 4 has added support for cooperatively cancelling long-running tests in several scenarios. Cooperative cancellation is the preferred way to end any long-running operation in .NET as it allows operations to end gracefully; however, it requires explicit coordination from the calling code. This coordination is handled, in part, by passing a CancellationToken into the potentially long-running method. The cancellation token can be signaled to a “cancelled” state, allowing the long-running operation to react and end itself gracefully.

    For example, consider the below code, which gets an external resource over HTTP with a cancellationToken passed in as the last parameter. The GetAsync() method will exit early when the operation is cancelled.

            var httpClient = new HttpClient();
            await httpClient.GetAsync("https://server", cancellationToken);
    

    But where does this cancellationToken come from? It’s possible to construct and manage the token from a CancellationTokenSource yourself; however, frameworks will often have a mechanism to do this for you.
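    For reference, a hand-rolled version using CancellationTokenSource might look like this sketch (the URL and five-second timeout are placeholders):

    ```csharp
    // Cancel automatically after 5 seconds; cts.Cancel() could also be called manually.
    using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
    var httpClient = new HttpClient();
    try
    {
        await httpClient.GetAsync("https://server", cts.Token);
    }
    catch (OperationCanceledException)
    {
        // React to the cancellation and end gracefully here.
    }
    ```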

    NUnit supports cooperative cancellation in a few ways, the simplest of which is through the CancelAfter attribute. This attribute will indicate to NUnit that it should manage a cancellation token on behalf of the test. The cancellation token itself can be used by the test in one of two ways:

    Read from the test context:

            [Test]
            [CancelAfter(CooperativeTimeoutMilliseconds)]
            public async Task WithCooperativeCancellation_Context()
            {
                var delay = TimeSpan.FromMilliseconds(600);
                var timer = Stopwatch.StartNew();
                await Task.Delay(delay, TestContext.CurrentContext.CancellationToken);
                timer.Stop();
    
                var expectedDelay = Math.Min(delay.Milliseconds, CooperativeTimeoutMilliseconds);
    
                Assert.That(timer.ElapsedMilliseconds, Is.EqualTo(expectedDelay).Within(50));
            }
    

    Passed by NUnit as an argument into the method:

            [Test]
            [CancelAfter(CooperativeTimeoutMilliseconds)]
            public async Task WithCooperativeCancellation_Argument(CancellationToken cancellationToken)
            {
                var delay = TimeSpan.FromMilliseconds(600);
                var timer = Stopwatch.StartNew();
                await Task.Delay(delay, cancellationToken);
                timer.Stop();
    
                var expectedDelay = Math.Min(delay.Milliseconds, CooperativeTimeoutMilliseconds);
    
                Assert.That(timer.ElapsedMilliseconds, Is.EqualTo(expectedDelay).Within(50));
            }
    

    Both conventions are supported by NUnit and will allow a long-running test to respond to and gracefully cancel any long-running tasks in flight.

  • Retrying Tests on Exception in NUnit

    Tests, especially unit tests, should be reliable and reproducible. System or integration tests can, however, exercise many different parts of a codebase or even a network. This increase in the number of moving pieces can potentially lead to decreased test reliability. NUnit’s Retry attribute was added to support cases where a developer may believe a test could have a reasonable chance at passing if retried after an initial failure.

    For example, the below test will fail about half the time it is run, but it will be run up to 3 times before being finally marked as failed in the run.

    [Test, Retry(3)]
    public static void TestRandomlyEven()
    {
        Assert.That(Random.Shared.Next(), Is.Even);
    }
    

    Those retries, however, only occur when a test fails an assertion. Test failures due to unhandled exceptions are not retried. So the below test will also fail about half the time it is run, but those failures will not be retried and the test will immediately be marked as failed.

    [Test, Retry(3)]
    public static void TestRandomlyEven()
    {
        if (Random.Shared.Next() % 2 == 0)
            throw new InvalidOperationException("Odd number.");
        Assert.Pass();
    }
    

    This may be desirable depending on how a test is written and whether you want to be alerted to any instability or potential flakiness. There may be other cases, such as detailed system integration tests, where one may want network or database exceptions to be retried. Writing a custom attribute to support this is quite easy as most of the pieces are already exposed as public types within NUnit. The below was written against NUnit 4.4 but should work on lower versions too.

    [Test, RetryOnExceptionAttribute(3)]
    public static void TestRandomlyEven()
    {
        if (Random.Shared.Next() % 2 == 0)
            throw new InvalidOperationException("Odd number.");
        Assert.Pass();
    }
    
    public class RetryOnExceptionAttribute(int tryCount) : NUnitAttribute, IRepeatTest
    {
        public TestCommand Wrap(TestCommand command)
        {
            return new RetryAttribute.RetryCommand(new FailOnExceptionCommand(command), tryCount);
        }
    
        private class FailOnExceptionCommand(TestCommand innerCommand) : DelegatingTestCommand(innerCommand)
        {
            public override TestResult Execute(TestExecutionContext context)
            {
                try
                {
                    return innerCommand.Execute(context);
                }
                catch (Exception ex)
                {
                    context.CurrentResult.SetResult(ResultState.Failure, ex.Message, ex.StackTrace);
                    context.CurrentResult.RecordTestCompletion();
                    return context.CurrentResult;
                }
            }
        }
    }
    

    Fortunately the ability to retry on specific exceptions will be available as part of NUnit 4.5. A new RetryExceptions property will be added to the attribute that can be given an array of Types of exceptions to retry.

    [Test, Retry(3, RetryExceptions = [typeof(InvalidOperationException)])]
    public static void TestRandomlyEven()
    {
        if (Random.Shared.Next() % 2 == 0)
            throw new InvalidOperationException("Odd number.");
        Assert.Pass();
    }
    

    Each entry is treated as a base class (derived exception types also match), so retrying on all exceptions will become as simple as:

    [Test, Retry(3, RetryExceptions = [typeof(Exception)])]
    public static void TestRandomlyEven()
    {
        if (Random.Shared.Next() % 2 == 0)
            throw new InvalidOperationException("Odd number.");
        Assert.Pass();
    }
    

    A big thank you to manfred-brands for having added this feature recently.

  • Async Enumerable Test Sources in NUnit

    In my previous post I showed how to use awaitable TestCase or Value sources in NUnit 3.14. NUnit 4 continues the story of adding async support by also allowing TestCaseSource, ValueSource, or TestFixtureSource to return an IAsyncEnumerable.

    public class AsyncTestSourcesTests
    {
        [TestCaseSource(nameof(MyMethodAsync))]
        public void Test1Async(MyClass item)
        {
        }
    
        public static async IAsyncEnumerable<MyClass> MyMethodAsync()
        {
            using var file = File.OpenRead("Path/To/data.json");
            await foreach (var item in JsonSerializer.DeserializeAsyncEnumerable<MyClass>(file))
            {
                yield return item;
            }
        }
    
        public class MyClass
        {
            public int Foo { get; set; }
            public int Bar { get; set; }
        }
    }
    

    As with IEnumerable-backed sources, NUnit will lazily enumerate the collection to avoid bringing all the objects into memory at once, instead only generating the test or value sources when needed.

    Many async enumerable operations also require async disposal of the underlying resource after enumerating the collection. NUnit will take care of calling the dispose method to ensure everything is cleaned up properly. In the event that both DisposeAsync and Dispose methods are present, NUnit will only call the asynchronous DisposeAsync method, not the synchronous Dispose method.
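    Conceptually, NUnit’s consumption of a source like MyMethodAsync above is equivalent to an await foreach loop, which is what guarantees the asynchronous disposal (the loop body here is a placeholder):

    ```csharp
    // 'await foreach' awaits the enumerator's DisposeAsync() once enumeration
    // completes, which is why only the asynchronous dispose path is invoked.
    await foreach (var item in MyMethodAsync())
    {
        // NUnit materializes a test case from each item here
    }
    ```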

  • Async Test Sources in NUnit

    NUnit has long supported the definition of test cases in numerous forms, including via inline primitive data via the TestCaseAttribute or potentially more complex data returned from a method, property, or other source at runtime via TestCaseSourceAttribute. The latter has typically only supported synchronous methods. This complicated defining data-driven test cases where an internal operation required calling a Task-based API. A common example of this could be a test case which reads from a JSON source file or other stream using a method like JsonSerializer.DeserializeAsync().

    In the past this would mean an awkward and unnatural call using something like .GetAwaiter().GetResult():

    public class Tests
    {
        [TestCaseSource(nameof(MyMethod))]
        public void Test1(MyClass item)
        {
        }
    
        public static IEnumerable<MyClass> MyMethod()
        {
            using var file = File.OpenRead("Path/To/data.json");
            var t = JsonSerializer.DeserializeAsync<IEnumerable<MyClass>>(file).AsTask();
    
            return t.GetAwaiter().GetResult();
        }
    }
    

    NUnit 3.14 was released a few months ago and included support for “async” or task-based test case sources. Now a TestCaseSource can target a Task-returning method to allow for much more natural code:

    public class Tests
    {
        [TestCaseSource(nameof(MyMethodAsync))]
        public void Test1Async(MyClass item)
        {
        }
    
        public static async Task<IEnumerable<MyClass>> MyMethodAsync()
        {
            using var file = File.OpenRead("Path/To/data.json");
            return await JsonSerializer.DeserializeAsync<IEnumerable<MyClass>>(file);
        }
    }
    

    The above example focuses on Task, but any awaitable type such as ValueTask or a custom awaitable also works. Other “source” attributes such as TestFixtureSource or ValueSource are supported as well.

  • Creating a Grunt plugin on Windows in MINGW

    I’ve been working with grunt a bit lately and have found the need for a task that doesn’t seem to exist in the expansive list of plugins. So I thought I’d create my own. Fortunately, grunt has a great and simple page on doing this (http://gruntjs.com/creating-plugins).

    Unfortunately, I ran into some issues.

    I usually like to work in the Git Bash shell that comes with MINGW. Trouble is, this was causing some pathing issues. Specifically, with these two commands:

    1. Install the gruntplugin template with git clone git://github.com/gruntjs/grunt-init-gruntplugin.git ~/.grunt-init/gruntplugin (%USERPROFILE%\.grunt-init\gruntplugin on Windows).
    2. Run grunt-init gruntplugin in an empty directory.

    Apparently MINGW, or at least my version, has some issues resolving %USERPROFILE%. So I ended up with a cloned git repo in my local directory called %USERPROFILE%.grunt-initgruntplugin. After fixing that and moving it to my root user profile I kept getting an “EINVAL” issue on the next command. I figured this had to be a pathing issue too, so I dropped out of MINGW into a cmd shell by typing cmd. Except that didn’t quite do it. Maybe MINGW was intercepting input and filtering it in an unexpected way, but my problems became even worse. So my fix:

    1. Add git to your OS Path variable (C:\Program Files\Git\bin)
    2. Run a regular command shell (cmd outside of MINGW)

    With those two small changes, everything worked flawlessly.

  • Determining the version of MINGW

    As a Windows developer who uses git and gcc, I found it easiest to install MINGW to help work in a console (Git Bash here is a fantastic shell extension!). Unfortunately, it’s been a while since I installed it and I forget the version I’m using. After a bit of googling, it seems someone (stahta01) figured out years ago how to determine this with a shell script:

    http://forums.codeblocks.org/index.php?topic=9054.0

    Just so I don’t have to go searching for it again, I’ve copied the script and included it below:

    @echo off
    REM version-of-mingw.bat
    REM credit to Peter Ward work in ReactOS Build Environment RosBE.cmd it gave me a starting point that I edited.
    ::
    :: Display the current version of GCC, ld, make and others.
    ::
    
    REM %CD% works in Windows XP, not sure when it was added to Windows
    set MINGWBASEDIR=C:\MinGW
    REM set MINGWBASEDIR=%CD%
    ECHO MINGWBASEDIR=%MINGWBASEDIR%
    SET PATH=%MINGWBASEDIR%\bin;%SystemRoot%\system32
    if exist %MINGWBASEDIR%\bin\gcc.exe (gcc -v 2>&1 | find "gcc version")
    REM if exist %MINGWBASEDIR%\bin\gcc.exe gcc -print-search-dirs
    if exist %MINGWBASEDIR%\bin\c++.exe (c++ -v 2>&1 | find "gcc version")
    if exist %MINGWBASEDIR%\bin\gcc-sjlj.exe (gcc-sjlj.exe -v 2>&1 | find "gcc version")
    if exist %MINGWBASEDIR%\bin\gcc-dw2.exe (gcc-dw2.exe -v 2>&1 | find "gcc version")
    if exist %MINGWBASEDIR%\bin\gdb.exe (gdb.exe -v | find "GNU gdb")
    if exist %MINGWBASEDIR%\bin\nasm.exe (nasm -v)
    if exist %MINGWBASEDIR%\bin\ld.exe (ld -v)
    if exist %MINGWBASEDIR%\bin\windres.exe (windres --version | find "GNU windres")
    if exist %MINGWBASEDIR%\bin\dlltool.exe (dlltool --version | find "GNU dlltool")
    if exist %MINGWBASEDIR%\bin\pexports.exe (pexports | find "PExports" )
    if exist %MINGWBASEDIR%\bin\mingw32-make.exe (mingw32-make -v | find "GNU Make")
    if exist %MINGWBASEDIR%\bin\make.exe (ECHO It is not recommended to have make.exe in mingw/bin)
    REM ECHO "The minGW runtime version is the same as __MINGW32_VERSION"
    if exist "%MINGWBASEDIR%\include\_mingw.h" (type "%MINGWBASEDIR%\include\_mingw.h" | find "__MINGW32_VERSION" | find "#define")
    if exist "%MINGWBASEDIR%\include\w32api.h" (type "%MINGWBASEDIR%\include\w32api.h" | find "__W32API_VERSION")
    
    :_end
    PAUSE
    

    On my machine, it outputs exactly what I needed:

    MINGWBASEDIR=C:\MinGW
    gcc version 4.8.1 (GCC) 
    gcc version 4.8.1 (GCC) 
    GNU gdb (GDB) 7.6.1
    GNU ld (GNU Binutils) 2.24
    GNU windres (GNU Binutils) 2.24
    GNU dlltool (GNU Binutils) 2.24
    GNU Make 3.82.90
    #define __MINGW32_VERSION           3.20
    #define __W32API_VERSION 3.17
    Press any key to continue . . . 
    
  • Nod of the hat to integrating Popcorn js and BBB (Big Blue Button)

    It looks like a few people have been hitting my blog trying to find information on integrating Popcorn.js and Big Blue Button. I thought I’d take the opportunity to give a nod of the hat to a colleague, dseif, for his recent contribution towards making this possible at Hackanooga.

    THIS LINK has all the cool details.