TFS – Merging changes from a renamed branch – No matching items found in $/[Project] at the specified version

Merging changes in TFS through Visual Studio may fail if the branch you are merging from has been renamed, or moved under or out of a folder, with the following error:

No matching items found in $/[Project] at the specified version

Reproducing

  • Branch “Main” to “v1”
  • Check in a change to the “v1” branch
  • Rename the branch to “v2” (or change the path in any way, like putting it under a folder)
  • Try to merge the changeset you checked in to “v1” back into the “Main” branch.

Result
You will get an error similar to “No matching items found in $/[Project]/v2 at the specified version”

Fix
To work around this, you need to specify the old path for the merge, which you cannot do through the UI in Visual Studio. Instead, open the Visual Studio command prompt and run the following:

tf merge "$/[Project]/v1" "$/[Project]/Main" /recursive /version:C[ChangesetNumber]

Be sure to change [Project] to your team project name, and [ChangesetNumber] to the changeset you are merging from the branch into the main branch; or use some other version specifier if you aren’t merging by changeset number.

See the documentation on merging on MSDN for further information.

Dormant Bugs

Setup
Today I had to deal with a “show-stopper” bug for a project. At first glance it looked like the bug had just been introduced in the latest development cycle. An area of the application that behaved normally prior to this round of development was now deviating from the spec, but only in certain scenarios. If it was introduced recently, that means I could have been the developer who introduced it, which is fine; we all write buggy code. I would just rather not be the developer introducing this bug. I tend to think my code is clean and elegant. And of course, I was interested in how my tests missed this.

Reproduction
After getting a backup of the QA database, I was able to reproduce the issue. This was a relief, since at the outset it looked like it could have been a concurrency issue. Instead, it was easy to track down via debugging. What it boiled down to was something analogous to the following code:

bool exists = list.Any(item => item.SomeType == "abc");

if (exists)
{
    return list.Where(item => item.SomeBoolean);
}

Context
At the time the code was written, if item.SomeType was “abc”, then item.SomeBoolean would be true. In this case, the converse was also true; if item.SomeBoolean was true, then item.SomeType was “abc”. The two properties were implicitly coupled together.

Analysis
The problem is that we now have logic that depends on that coupling. What the developer wanted to return was the items whose “SomeType” property was “abc” (trust me). Since he knew that this was the same as looking at the “SomeBoolean” property, he depended on that property for the return statement. The implicit nature of this coupling was manifested as a set of assumptions in the mind of the original developer, assumptions that guided the logic behind his code. That was a mouthful, but take a moment to grok it; understand what it means for other developers. The code worked at the time it was written. However, it is obviously brittle. It is not future-proof. Well, it’s the future now, and the logic no longer holds water.

In other words
To frame the issue in other words, imagine writing a software system that manages people. You have a requirement to group people by their hair color. For some unknown reason, the requirement mandates that the app only needs to support blonds and brunettes. If this is the case, then getting a list of all people who do NOT have blond hair is equal to getting a list of all people who have brown hair. Any developer worth their salt would never use that logic, though. Six months down the road, you might get a new requirement to support red-heads, or white-hairs, or purple-hairs, or no-hairs. This means you have to be aware of the assumptions you’ve previously made, and correct them. This is doable, but we shouldn’t have to.
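The hair-color scenario above can be sketched in a few lines of JavaScript (the names and data here are invented for illustration). The brittle filter and the robust filter agree under the original two-color requirement, and silently diverge the moment a new hair color shows up:

```javascript
// Hypothetical data for the hair-color example.
const people = [
  { name: "Ann", hairColor: "blond" },
  { name: "Bob", hairColor: "brown" },
];

// Brittle: relies on the assumption that "not blond" implies "brown".
const brittle = people.filter(p => p.hairColor !== "blond");

// Robust: asks for what the requirement actually means.
const robust = people.filter(p => p.hairColor === "brown");

console.log(brittle.length === robust.length); // true (for now)

// Six months later, a red-head arrives...
people.push({ name: "Cat", hairColor: "red" });

// ...and only the brittle filter picks her up as a "brunette".
const brittle2 = people.filter(p => p.hairColor !== "blond");
const robust2 = people.filter(p => p.hairColor === "brown");
console.log(brittle2.length, robust2.length); // 2 1
```

The two filters encode the same assumption the original developer made: a proxy condition that happens to be equivalent today is not a substitute for the condition you actually mean.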

Back to reality
The “show-stopper” was not far removed from this hypothetical scenario. Your code should never make assumptions like these. This is defensive programming 101. Then again, maybe I should not make assumptions about the quality of the codebase either. There is a certain level of trust we need to have in our fellow programmers, and certain assumptions that we need to be able to make about a code base.

Remove and Sort Usings… ALT e i o a

I like clean code. When I work in C#, that includes sorting my using statements, as well as removing unused ones. It has been a habit of mine for a while now: whenever I reference a new type that requires a new using statement in a file, I automatically type “ALT e i o a” to invoke the “Remove and Sort” menu item. It looks a little long, but pound it into your head; it is very easy to type.

Common Table Expression for Fibonacci

More for my amusement than anything, the following T-SQL query utilizing a common table expression can be used to show the beginning of the Fibonacci sequence, as well as its convergence on, and relationship with, the golden ratio.

;WITH [Fibonacci] AS
(
	SELECT
		0 AS [Number],
		1 AS [NextNumber],
		1 AS [Index],
		CAST(NULL AS FLOAT) AS [Ratio]
	UNION ALL
	SELECT
		[NextNumber],
		[Number] + [NextNumber],
		[Index] + 1,
		CASE
			WHEN [Number] = 0 THEN NULL
			ELSE CAST([NextNumber] AS FLOAT) / [Number]
		END
	FROM
		[Fibonacci]
	WHERE
		[Index] < 25
)
SELECT
	[Number],
	[Ratio] AS [Ratio approaching golden ratio]
FROM
	[Fibonacci]

Note that this is a simpler version of the Fibonacci CTE found on Manoj Pandey’s blog. I’ve simplified his CTE by removing an unnecessary field, only to complicate it again with the additional ratio calculation.
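If you want to sanity-check the convergence outside SQL, the same recurrence takes only a few lines of JavaScript (this snippet is my own cross-check, not part of the original query):

```javascript
// The golden ratio, for comparison: (1 + sqrt(5)) / 2 ≈ 1.6180339887.
const goldenRatio = (1 + Math.sqrt(5)) / 2;

// Walk the Fibonacci sequence the same way the CTE does: 24 steps,
// tracking the ratio of each number to its predecessor.
let a = 0, b = 1, ratio = 0;
for (let i = 1; i < 25; i++) {
  [a, b] = [b, a + b];           // advance: (F(n), F(n+1)) -> (F(n+1), F(n+2))
  if (a !== 0) {
    ratio = b / a;               // same CASE-guarded division as the CTE
  }
}

console.log(ratio.toFixed(10));
console.log(Math.abs(ratio - goldenRatio) < 1e-8); // true
```

By the 25th term the ratio already agrees with the golden ratio to well past eight decimal places, which is exactly what the last rows of the query’s [Ratio] column show.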

Microsoft Fakes – Visual Studios Ultimate Mistake

Having just recently installed VS 2012 Ultimate, I naturally wanted to unit test a program I had begun coding. Being a big fan of Dependency Injection and mocking, my first instinct was to NuGet up a reference to Moq in my test project. I began to wonder what other options might be new to me in VS 2012, and remembered reading that VS 2012 has natively subsumed the Moles Framework, renaming it Microsoft Fakes after some modifications.

I was initially excited to try out Fakes, with the intention of using it with my team at work once everyone, and our projects, were upgraded to VS 2012. That excitement soon turned to disappointment when I found out that Microsoft Fakes is only supported in VS 2012 Ultimate. This is confusing, since pre-release information suggested that Fakes would be available in Premium as well as Ultimate. See Stack Overflow user Chai’s response to user Dan Sorensen’s question about adding a Fakes assembly to VS 2012 Professional RC. Chai asked a similar question on Microsoft’s Connect portal – possibly here – and the answer was disconcerting.

The RC documentation was incorrect. Fakes are available only in VS Ultimate.

Unit testing should be integral to any software development team. The barrier to entry to testing technologies should be as low as possible. I can understand having tiered versions of Visual Studio, and that not every developer is going to need to create architectural layer diagrams. That makes sense. What also makes sense is that every developer should be writing unit tests.

Unit testing is something that is so foundational to development, that unit testing functionality, and by extension, functionality that enables unit testing, like Microsoft Fakes, should be available to all editions of Visual Studio.

Not everyone on my team has Visual Studio Ultimate. This means that they are unable to use Microsoft Fakes. This discourages the use of Fakes by the developers with Ultimate. I’m not going to push to get all my coworkers Ultimate. They shouldn’t need Ultimate.

While I am currently disappointed and discouraged by Microsoft’s decision, I am hopeful that Microsoft will reverse course and enable Fakes in versions of Visual Studio 2012 other than Ultimate.

Force StyleCop to Ignore a File

There are multiple ways to cause StyleCop to ignore a file. You might want to do this, for instance, if you turn on StyleCop for a legacy project. Rather than spending hours getting every file in the legacy project to pass StyleCop, you can see immediate benefits by turning it on for the project, and then selectively turning it off for the legacy files that violate StyleCop. This means that any new files added to the project will have StyleCop enforced by default.

Option 1 – The “ExcludeFromStyleCop” element in a .csproj file

You can quickly force StyleCop to ignore files in a project by manually modifying the project file, and inserting the following element under each file being compiled that you wish to have ignored:

<Compile Include="AViolatingFile.cs">
    <ExcludeFromStyleCop>true</ExcludeFromStyleCop>
</Compile>

This option is particularly nice if you want to turn off StyleCop for many files in a project – think legacy projects. You only need to edit one file instead of many. This method is described in StyleCop’s documentation for working with legacy projects.

Option 2 – the auto-generated file header comment

StyleCop will ignore any file that is auto-generated by a tool. You can take advantage of this fact by tricking StyleCop into thinking your code is auto-generated. This is obviously a kludge, but if you really want to do this, simply add the following one-line comment to the top of your code file:

// <auto-generated/>

Option 3 – the generated code #region directive

StyleCop will ignore #regions of code that are auto-generated by a tool. This is another kludge, but to take advantage of it, you just need to add a region whose name ends with the text “generated code”. The example given in the StyleCop documentation is that of a typical InitializeComponent method. For instance:

#region Component Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
}
#endregion

You can extend one of these “generated code” #regions to encapsulate your entire class file if you wish.

Forcing StyleCop Violations to Break a Build in Visual Studio and MSBuild

From StyleCop’s website:

StyleCop analyzes C# source code to enforce a set of style and consistency rules. It can be run from inside of Visual Studio or integrated into an MSBuild project. StyleCop has also been integrated into many third-party development tools.

If you have installed StyleCop, then by default, StyleCop violations only appear as warnings when you do a build. This is fine, unless you want StyleCop violations to fail a build, such as when your team uses a continuous integration build or a gated check-in. In order to cause StyleCop violations to fail your build, you need to add the following element to each of your .csproj files:

<StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

You should place this “StyleCopTreatErrorsAsWarnings” element under the Project/PropertyGroup element. In addition to this element, you need to add an import element in order to import the StyleCop MSBuild targets. This will look something like the following:

<Import Project="$(ProgramFiles)\MSBuild\Microsoft\StyleCop\v4.4\Microsoft.StyleCop.targets" />

This “Import” element should be a child of your Project element. You can see that the attribute represents a path to an MSBuild targets file. The path also includes a version number for the particular version of StyleCop I have installed. You may have to adjust the path as necessary.

RGB for Beginners

More than once I’ve been surprised to learn that a coworker has no idea how to read RGB values. If you work in HTML or CSS or [insert your favorite UI technology here], you should have a basic understanding of the RGB color model.

Red Green Blue or RRGGBB
Colors are often written using six hexadecimal digits, preceded by a pound or hash sign, where the first two represent red, the middle two represent green, and the last two represent blue. So, the color “#12F547” means that there is a red value of “12”, a green value of “F5”, and a blue value of “47”. The lower the value, the less of that color that goes into the final resulting color.

White and Black
The range of possible values for just one portion of the color is “00” to “FF”. If all three of the colors are 0, then the final color is black. So, “#000000” is equal to black. Conversely, if all three of the colors are as high as possible, the resulting color is white. So, “#FFFFFF” is white.

Shades of Gray
You make shades of gray by setting the R, G, and B values to the same hexadecimal number. So, “#707070” is a middle-of-the-road gray color, because the red portion, as well as the green and the blue portions, are all “70”, which is roughly halfway between “00” and “FF”. Likewise, “#C2C2C2” is a fairly light gray, because each portion carries the value “C2”, which is pretty close to “FF”, or white. Also, “#272727” is a very dark gray, because each portion carries the value “27”, which is close to “00”, or black.

#FFFFFF  
#EEEEEE  
#DDDDDD  
#CCCCCC  
#BBBBBB  
#AAAAAA  
#999999  
#888888  
#777777  
#666666  
#555555  
#444444  
#333333  
#222222  
#111111  
#000000  

Just Red
If we wanted just a red value, we would raise the red portion of the color, and lower the green and blue portions. So, “#FF0000” is fully red. If we wanted a darker red, we would reduce the red value. So, “#770000” results in a darker red.

#FF0000  
#EE0000  
#DD0000  
#CC0000  
#BB0000  
#AA0000  
#990000  
#880000  
#770000  
#660000  
#550000  
#440000  
#330000  
#220000  
#110000  
#000000  

Mixing Colors
By mixing different values of red, green, and blue, we can find other colors. Some simple ones are Red + Green = Yellow, Red + Blue = Magenta, and Green + Blue = Cyan.

#FFFF00  
#FF00FF  
#00FFFF  

RGBA or ARGB
In some technologies, like Microsoft’s WPF, you can specify an alpha value to control the transparency of the color. A value of “FF” is fully on, so it is fully opaque, whereas a value of “00” is fully off, or transparent. Depending on the technology, the alpha value might come at the beginning of the color, or at the end. In WPF, it comes at the beginning, so a value of “#FF006400” is equal to a fully opaque dark green color.

rgb(255,255,255)
Some technologies, like CSS, allow you to specify colors using a format of rgb(x,y,z) where x, y, and z are all numbers from 0 to 255 and x represents the red value, y represents the green value, and z represents the blue value. In case you are not familiar with hexadecimal, the decimal value 255 is equal to the hexadecimal value “FF”, so rgb(255,0,255) is equal to #FF00FF.
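The relationship between the two notations is just base conversion, and it is easy to automate. Here is a small JavaScript helper (my own illustrative sketch, not from any particular library) that turns a “#RRGGBB” string into its decimal rgb() components:

```javascript
// Convert a "#RRGGBB" hex color string into [red, green, blue] decimals.
function hexToRgb(hex) {
  const value = parseInt(hex.replace("#", ""), 16);
  return [
    (value >> 16) & 0xFF, // first two digits: red
    (value >> 8) & 0xFF,  // middle two digits: green
    value & 0xFF,         // last two digits: blue
  ];
}

console.log(hexToRgb("#FF00FF")); // [255, 0, 255]
console.log(hexToRgb("#12F547")); // [18, 245, 71]
```

Note that “FF” comes out as 255 and “00” as 0, matching the equivalence between #FF00FF and rgb(255,0,255) described above.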

Conclusion
Now, when reviewing code, you should immediately be able to recognize that #00FF00 and rgb(0,255,0) both represent green colors. You should also be able to tell that #009A00 is a darker green than #00FF00.

JavaScript Unit Tests with QUnit and jQuery

Are you unit testing your JavaScript code? There are numerous JavaScript unit testing frameworks out in the wild: JsUnit, RhinoUnit, QUnit, YUI’s Yeti, js-test-driver, etc. In fact, here is a Stack Overflow thread listing all of these and more. I’ve recently used QUnit, a framework written on top of jQuery and maintained by the folks over at jQuery.

In order to use QUnit to test your javascript code, you create a single html page to house your tests. In the header of your page, you link to the jQuery script file, a QUnit script file, and a QUnit stylesheet. In the body of your html page, you insert the following markup, which will be used by QUnit as a console to output the results of your tests.

    <h1 id="qunit-header">Your Test Header</h1>
    <h2 id="qunit-banner"></h2>
    <div id="qunit-testrunner-toolbar"></div>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
    <div id="qunit-fixture"></div>

Once these things are in place, you can begin writing unit tests for your JavaScript. To do this, QUnit provides a function called “test”, which takes a string parameter that acts as the name of the test in the results, and a function parameter, which is just an anonymous function containing assertion-style calls to other QUnit functions. The details of the test function can be found on QUnit’s site. Let’s say you were writing a math library, and decided to include an “add” function. A simple test for this function would look like:

test("addition tests", function () {
    equal(add(2, 3), 5, "2 + 3 should equal 5");
});
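The “add” function under test is never shown in this post; for completeness, the obvious minimal implementation the test assumes would be:

```javascript
// Minimal implementation of the math library's add function under test.
function add(a, b) {
  return a + b;
}
```

With this definition in scope on the test page, the assertion `equal(add(2, 3), 5, ...)` passes.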

Or how about a more complicated example? Let’s say you were writing a genetic algorithm engine, and wanted to test your crossover functions. Your functions and their tests might look like:

        $(function () {

            var geneticAlgorithm = {
                onePointCrossOver: function (mom, dad, index) {
                    var result = [];

                    result[result.length] = mom.slice(0, index)
                        .concat(dad.slice(index));

                    result[result.length] = dad.slice(0, index)
                        .concat(mom.slice(index));

                    return result;
                },
                twoPointCrossOver: function (mom, dad, indexA, indexB) {
                    var indexAResult =
                        this.onePointCrossOver(mom, dad, indexA);

                    return this.onePointCrossOver(
                        indexAResult[0], indexAResult[1], indexB);
                }
            };

            function compareResults(actual, expected) {
                equal(actual.length, expected.length,
                    "The length should be the same.");
                equal(actual.length, 2,
                    "There should be two children.");

                compareChild(actual[0], expected[0]);
                compareChild(actual[1], expected[1]);
            }

            function compareChild(actual, expected) {
                equal(actual.length, expected.length,
                    "The number of genes should be the same.");

                for (var i = 0; i < expected.length; i++) {
                    equal(actual[i], expected[i],
                        "The gene at '" + i + "' should be the same.");
                }
            }

            module("Cross Over tests");

            test("onePointCrossOver tests", function () {
                var mom = [1, 2, 3, 4];
                var dad = [5, 6, 7, 8];

                var childA = [1, 2, 7, 8];
                var childB = [5, 6, 3, 4];

                var actual = geneticAlgorithm
                    .onePointCrossOver(mom, dad, 2);
                var expected = [childA, childB];

                compareResults(actual, expected);
            });

            test("twoPointCrossOver tests", function () {
                var mom = [1, 2, 3, 4, 5, 6];
                var dad = [7, 8, 9, 10, 11, 12];

                var childA = [1, 2, 9, 10, 5, 6];
                var childB = [7, 8, 3, 4, 11, 12];

                var actual = geneticAlgorithm
                    .twoPointCrossOver(mom, dad, 2, 4);
                var expected = [childA, childB];

                compareResults(actual, expected);
            });
        });

You can view the results of the tests here. Notice that you can click on each of the tests in the results to get more details. If any of the tests had failed, the failing tests would be marked in red, and expanded, showing the expected value, and the actual value, along with other helpful information.

Basic Diff with a generic solution to the Longest Common Subsequence Problem

At the heart of a basic string comparison is what is called the Longest Common Subsequence problem.
http://en.wikipedia.org/wiki/Longest_common_subsequence_problem

The algorithm itself is normally used for strings, but there is nothing to prevent you from using it on a collection of anything; after all, a string is merely a collection of characters. I’ve modified the algorithm slightly to use a generic C# collection. Before showing you the code, it is probably useful to walk through a simple unit test that shows how the diff class can be used.

[TestMethod()]
public void FindDifferenceTest()
{
    Collection<int> baseline = new Collection<int> { 0, 1, 2, 3 };
    Collection<int> revision = new Collection<int> { 0, 1, 4, 3, 5 };

    IList<ComparisonResult<int>> expected = new List<ComparisonResult<int>>
    {
        new ComparisonResult<int>{ DataCompared = 0, ModificationType = ModificationType.None },
        new ComparisonResult<int>{ DataCompared = 1, ModificationType = ModificationType.None },
        new ComparisonResult<int>{ DataCompared = 2, ModificationType = ModificationType.Deleted },
        new ComparisonResult<int>{ DataCompared = 4, ModificationType = ModificationType.Inserted },
        new ComparisonResult<int>{ DataCompared = 3, ModificationType = ModificationType.None },
        new ComparisonResult<int>{ DataCompared = 5, ModificationType = ModificationType.Inserted }
    };

    CollectionDiffer<int> differ = new CollectionDiffer<int>();
    IList<ComparisonResult<int>> actual = differ.FindDifference(baseline, revision);

    Assert.IsNotNull(actual);
    Assert.AreEqual(expected.Count, actual.Count);

    for (int index = 0; index < expected.Count; index++)
    {
        ComparisonResult<int> expectedResult = expected[index];
        ComparisonResult<int> actualResult = actual[index];

        Assert.AreEqual(expectedResult.DataCompared, actualResult.DataCompared);
        Assert.AreEqual(expectedResult.ModificationType, actualResult.ModificationType);
    }
}

Of interest in this code are two custom classes, ComparisonResult and CollectionDiffer. The CollectionDiffer class is the one we are most interested in, but since it returns a collection of ComparisonResult objects, we should probably try to understand that class first. A ComparisonResult contains two properties: one an enum of type ModificationType, and the other a generic property, DataCompared, that holds the value of the portion of the data being compared. If this is confusing, just try to grok what is being tested in the unit test.

For reference, the ModificationType and ComparisonResult types look like:

public enum ModificationType
{
    None,
    Inserted,
    Deleted
}

public class ComparisonResult<T>
{
    public ModificationType ModificationType { get; set; }
    public T DataCompared { get; set; }
}

In the unit test you can see that we have two variables declared at the top of the test, baseline and revision. They are both collections of type int. You’ll notice that they are pretty similar in value. This is a pretty trivial example, and you are probably thinking “Why would I need an algorithm to tell me that the 2 has been deleted and the 4 and the 5 have been inserted?”. This is a valid point, but only for trivial examples. The hairy thing about a diff algorithm is that without change tracking in place during the actual modifications, there is no way to know with 100% certainty what has been deleted or inserted in certain scenarios. For instance, given the string “ABC”, and a revision “ACB”, either of the following scenarios could have happened:

  • B was deleted, and then inserted at the end.
  • C was deleted, and then inserted in the middle.

In fact, those two are just the most likely scenarios, but there are really an infinite number of ways to get from “ABC” to “ACB”.

The same ambiguity applies no matter the type. The point is, letting a proven algorithm decide what has been deleted and inserted will provide consistent results when they are needed. Using an integer for the TypeParam also isn’t all that exciting. You could just as easily make it a char, or a Guid, or anything you like.

Now for the actual C# implementation of a generic solution to the longest common subsequence problem. I’m telling you up front, it’s a little tricky, and this post isn’t about to go into how the LCS algorithm actually works. If you are looking for that information, I suggest Wikipedia.

public class CollectionDiffer<T>
{
    public virtual IList<ComparisonResult<T>> FindDifference(
        Collection<T> baseline,
        Collection<T> revision)
    {
        int[,] differenceMatrix =
            GetLCSDifferenceMatrix<T>(baseline, revision);

        return FindDifference(
            differenceMatrix,
            baseline,
            revision,
            baseline.Count,
            revision.Count);
    }

    private static IList<ComparisonResult<T>> FindDifference(
        int[,] matrix,
        Collection<T> baseline,
        Collection<T> revision,
        int baselineIndex,
        int revisionIndex)
    {
        List<ComparisonResult<T>> results = new List<ComparisonResult<T>>();

        if (baselineIndex > 0 && revisionIndex > 0 &&
            baseline[baselineIndex - 1].Equals(revision[revisionIndex - 1]))
        {
            results.AddRange(
                FindDifference(matrix, baseline, revision, baselineIndex - 1, revisionIndex - 1));

            results.Add(new ComparisonResult<T>
            {
                DataCompared = baseline[baselineIndex - 1],
                ModificationType = ModificationType.None
            });
        }
        else if (revisionIndex > 0 && (baselineIndex == 0 ||
            matrix[baselineIndex, revisionIndex - 1] >= matrix[baselineIndex - 1, revisionIndex]))
        {
            results.AddRange(
                FindDifference(matrix, baseline, revision, baselineIndex, revisionIndex - 1));

            results.Add(new ComparisonResult<T>
            {
                DataCompared = revision[revisionIndex - 1],
                ModificationType = ModificationType.Inserted
            });
        }
        else if (baselineIndex > 0 && (revisionIndex == 0 ||
            matrix[baselineIndex, revisionIndex - 1] < matrix[baselineIndex - 1, revisionIndex]))
        {
            results.AddRange(
                FindDifference(matrix, baseline, revision, baselineIndex - 1, revisionIndex));

            results.Add(new ComparisonResult<T>
            {
                DataCompared = baseline[baselineIndex - 1],
                ModificationType = ModificationType.Deleted
            });
        }

        return results;
    }

    private static int[,] GetLCSDifferenceMatrix<T>(
        Collection<T> baseline,
        Collection<T> revision)
    {
        int[,] matrix = new int[baseline.Count + 1, revision.Count + 1];

        for (int baselineIndex = 0; baselineIndex < baseline.Count; baselineIndex++)
        {
            for (int revisionIndex = 0; revisionIndex < revision.Count; revisionIndex++)
            {
                if (baseline[baselineIndex].Equals(revision[revisionIndex]))
                {
                    matrix[baselineIndex + 1, revisionIndex + 1] =
                        matrix[baselineIndex, revisionIndex] + 1;
                }
                else
                {
                    int possibilityOne = matrix[baselineIndex + 1, revisionIndex];
                    int possibilityTwo = matrix[baselineIndex, revisionIndex + 1];

                    matrix[baselineIndex + 1, revisionIndex + 1] =
                        Math.Max(possibilityOne, possibilityTwo);
                }
            }
        }

        return matrix;
    }
}
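For readers who would rather experiment outside C#, the dynamic-programming recurrence that GetLCSDifferenceMatrix builds can be sketched in a few lines of JavaScript. This is my own compact cross-check of the same idea, returning just the LCS length rather than the full diff:

```javascript
// Build the LCS length matrix for two arrays, using the same recurrence
// as GetLCSDifferenceMatrix: extend the diagonal on a match, otherwise
// take the max of the cell above and the cell to the left.
function lcsLength(baseline, revision) {
  const rows = baseline.length + 1;
  const cols = revision.length + 1;
  const matrix = Array.from({ length: rows }, () => new Array(cols).fill(0));

  for (let i = 0; i < baseline.length; i++) {
    for (let j = 0; j < revision.length; j++) {
      matrix[i + 1][j + 1] = baseline[i] === revision[j]
        ? matrix[i][j] + 1
        : Math.max(matrix[i + 1][j], matrix[i][j + 1]);
    }
  }

  return matrix[baseline.length][revision.length];
}

console.log(lcsLength([..."ABC"], [..."ACB"])); // 2
console.log(lcsLength([0, 1, 2, 3], [0, 1, 4, 3, 5])); // 3
```

The second call mirrors the unit test above: the longest common subsequence of the baseline and revision is {0, 1, 3}, so everything outside it shows up as a deletion or an insertion in the full diff.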