June 22, 2004

Team System Testing: Microsoft just might have it right

The history of Microsoft dev tools is riddled with folklore about how Microsoft rarely communicates with developers until it is way too late. Peter Provost's petition is an example of how developers push back in an effort to get Microsoft to listen when it seems to fall out of line. I had a refreshing experience at the geek dinner that shows that is not always the case. First off, Jason and Tom came out in response to those concerns and gave us Microsoft's side of the story. More importantly, they came to listen to the community's feedback on Team System. I really appreciated that. No one told them to come to the dinner. No one asked them to give their perspective in such a forum. They just did it. Having the opportunity to discuss WHY we feel this way about the unit testing was nice, and it was the right way to address customer concerns early in the process. During the dinner Jason was continuously challenged on the decision not to include unit testing in the base dev tools. Different people had different views, and I think we all focused on only a small aspect of what Jason offered in rebuttal. But a core 'challenge' theme continued to drive the discussion.

So what was Jason challenged on at the geek dinner? Explain to us why Microsoft decided to integrate everything so tightly together. With the gauntlet thrown down, he responded... in a way that no one would have expected. He took us back to the Microsoft campus, found us a conference room in Building 41 and gave us a demo of the daily build. This demo apparently went even deeper than the one at TechEd, which was cool.

Robert was tasked with seeing about getting some footage for Channel9 at the geek dinner, and when Jason offered to take us to see Team System I made sure Robert had the opportunity to come and record it. He got over two hours of video so hopefully some of that will make it up on MSDN in the next few weeks. I also got an opportunity to grab some stills of the demo on my digital camera, which I have uploaded to my Gallery. Unfortunately most of them didn’t turn out well. Let's hope Robert's shots do better.

I wish I really could blog the experience as I saw it. The problem is that others who were there surely came away with different 'takeaways' than I did, so my account alone paints an incomplete picture. As others blog about it I am sure Jason or Robert will link to their experiences to give you a more rounded view of the whole thing. So make sure you check out their blogs over the next couple of days.

With that said, let's talk about my experience with the demo. I now understand why Microsoft coupled everything so tightly; the integration of the entire test suite considerably strengthens their offering as a single solution that covers heterogeneous testing tasks inside the existing IDE tools. There is no huge learning curve in adopting testing techniques in the tools you already use. They have reduced the complexity of the process, which in turn should expose more developers to it, resulting in more use... and ultimately higher quality code. Think about it... unit testing and test-driven development aren't the number one priority on a lot of dev teams, even though the statistics show they reduce costs and produce higher quality software compared to the normal software development life cycle. (Let's not argue on this point... there are far better people out there to fight over those stats than I.) If this just becomes another part of the development process within the tools developers are already familiar with, the barrier to entry for test integration is considerably lowered. I have a poor screenshot showing how tests get integrated directly into the solution; another shot shows how, in the pane where properties normally live, a new “Test View” tab allows you to quickly see and execute your tests. I even got a shot of the test results window, which can quickly show you a pass/fail list of the tests that you run. Notice something important here… these are directly integrated into Visual Studio as just another component in the existing docking model; there is no further screen complexity or clutter to get the tools to you!!

I still hold to my signature on the petition asking for some rudimentary unit testing in the entry level product for VS2005. It is definitely possible to separate out parts of the unit testing, but unless you see the bigger picture, you won't realize that what Microsoft is offering is actually much more, and is WORTH the investment in moving to the higher SKUs for Team System anyway. I think that having the basic unit testing framework in the entry level products could go a long way to covertly transfer the body of knowledge of unit testing to new or inexperienced developers, creating an upgrade path to the bigger system as they need it.

If you want separated unit testing, you can (and should) use NUnit now. (Remember, Team System is still a ways off.) NUnit is a great unit-testing framework and is available today. More importantly, your work on the NUnit tests won't be wasted or lost if you move to Team System later. Its unit testing is very similar to NUnit's, using attribute-based tags to define and run tests. Microsoft has also included a 'generic' test type which has the potential to let you map the NUnit-style testing framework into Team System.
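If you have never seen the xUnit pattern NUnit popularized (and Team System mirrors), here is a minimal sketch of it in Python's unittest, a sibling xUnit framework. The Calculator class is purely illustrative, not anything from the demo; NUnit would mark the fixture and tests with [TestFixture]/[Test] attributes, while unittest uses naming conventions instead.

```python
import unittest

# Hypothetical class under test -- stands in for whatever your
# project actually exposes.
class Calculator:
    def add(self, a, b):
        return a + b

# In NUnit this would be a [TestFixture] class with [Test]-attributed
# methods; unittest finds tests by the test_ naming convention.
class CalculatorTests(unittest.TestCase):
    def setUp(self):
        # Analogous to NUnit's [SetUp]: runs before each test.
        self.calc = Calculator()

    def test_add_returns_sum(self):
        # Analogous to a [Test] method.
        self.assertEqual(self.calc.add(2, 3), 5)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Whatever the syntax, the shape is the same: a fixture, per-test setup, and small assertions — which is why NUnit tests should translate to Team System without much pain.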

One feature I REALLY liked was the fact that you can right click on a namespace, class or method, and generate the ENTIRE testing harness framework in a single pass. I am not sure if some of the people there realized how powerful that is. Being able to generate the entire harness without writing a single line of code does a lot to 'dumb down' the tedious parts and lets a developer get right to the test code. This will do more than any amount of preaching about why unit tests are good... since now you can just jump in and do it.

Am I making sense? Consider this: most documentation on unit testing explains what it's about, but rarely shows a practical harness that can be used in the code you care about... your own master sources. Microsoft's approach immediately BUILDS that for you, alleviating a major hurdle in the unit testing process... allowing the developer to immediately get to the heart of the test code without worrying about how to integrate it. Of course there is still the learning curve on what and how to test... but this is a pretty major leap forward in the integration between unit tests and the master sources in your Visual Studio project/solution.
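Conceptually, "generate the harness" just means walking a type's public surface and emitting one stub test per member. A rough home-grown sketch of the idea in Python, using introspection — the Account class and the stub naming scheme are my own invention, not Team System's actual output:

```python
import inspect

# Hypothetical class we want a test harness for.
class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...

def generate_test_stubs(cls):
    """Emit one stub test per public method, roughly what a
    right-click 'generate tests' pass produces for you."""
    stubs = []
    for name, member in inspect.getmembers(cls, inspect.isfunction):
        if not name.startswith("_"):   # skip private/dunder members
            stubs.append(f"def test_{name}(self): ...  # TODO: arrange/act/assert")
    return stubs

for line in generate_test_stubs(Account):
    print(line)
```

The generated stubs are deliberately empty: the tool removes the boilerplate, and the developer fills in only the interesting part — the assertions.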

But it didn't end there. Unit testing has been around for ages; it's just that most people don't use it. Hopefully now that weakness can be addressed with Team System. Something that turned my crank even more was the idea of test-driven development within Team System. Microsoft got this one right. And in a very neat way.

I was turned onto the idea of test-driven development when a few of the developers I managed at my last company bought me Extreme Programming Explained: Embrace Change. I tried to have an open mind, but at the time could not justify migrating to the XP development process beyond some pair programming and simple story/task cards. I should also state that I have not done any real test-driven development myself, mostly because I have been focused on learning other new development processes; well, to be honest, I have been too lazy to write the tests first… with the voice in the back of my mind echoing 'redundant effort… redundant effort'. Well, I think that will change with Team System. They have integrated awesome refactoring and test-driven development tasks directly into the system. My jaw dropped when they created a test and then GENERATED THE STUB for the code I would then need to write. Let me be clear here... Microsoft has taken the work out of it for you. You can write a basic test in the system, focusing on what you EXPECT to come in via params and what you will return etc... and Team System will then generate the methods within the class you are building. I tried to get a shot of the refactor right click menu, but it didn't turn out very well.

Another feature I REALLY liked was Microsoft's hooks for code coverage. They can analyze your testing harnesses and create a report showing you the code coverage from your tests. Not only can they show you the percentage of code covered in any section of code, they can VISUALLY show you the lines of code that are never reached. This is really important; you can graphically see when your tests are not properly covering all your code blocks. I have one shot where you can see by color coding that a particular line is never hit, showing that the code execution path isn't fully tested. I have another that even shows an entire block that was missed because an exception was ALWAYS occurring.

Taking a tangent here, I am sure what I am about to say will have some people disagreeing with me in venomous debate about testing techniques. That's ok… you are welcome to your opinion. But this is my blog, and I have trapped you into reading mine :) So please bear with me.

I believe that testing is MORE vital in the failure code paths than in the normal execution blocks. My reasoning is that, in my experience, normal operations get tested during both functional and unit testing anyways... rarely are failures tested to see how resilient the code is when things go wrong. Many attack vectors have been shown to compromise code through the results of failure or state change, rather than through successful execution. As such, code coverage is an awesome tool: it lets you write particular 'resiliency' tests and forces you to see whether you are hitting those exception handling routines. For people writing kernel mode code, I think it's even more critical. Structured Exception Handling (SEH) is routinely used incorrectly, causing bigger problems in the kernel than if they just let the system BSOD. I have seen ugly code where, on failure, a trap does nothing more than allow code execution to continue, typically in an unstable state that will end up crashing anyway upstream on someone else's call stack, causing errors in the wrong part of the system. Microsoft's WHQL test process requires kernel code to have 70% code coverage during testing... now you can apply the same metrics to your usermode code within Team System.
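A 'resiliency' test in this spirit deliberately drives the failure path and asserts the code fails loudly and safely, rather than only checking the happy path. A small sketch — the parse_port helper is invented purely for illustration:

```python
import unittest

def parse_port(value):
    """Parse a TCP port, failing loudly on bad input instead of
    limping along in an unstable state."""
    port = int(value)            # raises ValueError on garbage input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class FailurePathTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_port("8080"), 8080)

    # These are the tests coverage analysis rewards: they are the
    # only way the error-handling lines ever get executed.
    def test_garbage_input_raises(self):
        with self.assertRaises(ValueError):
            parse_port("not-a-port")

    def test_out_of_range_raises(self):
        with self.assertRaises(ValueError):
            parse_port("70000")
```

Without the last two tests, a coverage report would flag both raise paths as never reached — which is precisely the feedback loop being praised above.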

There is so much other stuff I saw, my brain hurts. Besides unit tests, Team System includes neat testing types such as Web, Manual, and Load. The web tests were interesting as you could record a session and then play it back. I could see some interesting ways to do tainted data injection testing with this. The manual tests were a way to have a documented process on how to manually test something, and have it come full circle back into Team System in a structured manner. The load tests were interesting as you could apply a collection of events to test load scenarios; want to see how your web service would handle being slashdotted?? Be my guest. They have even integrated bug reporting and source control directly into this entire process; you can now right click on a test that failed and immediately file it as a bug directly within the IDE. I got a shot of this, but like most of the others it didn't turn out very well.
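At its simplest, a load test fires many concurrent requests at the thing under test and records timings. A toy version in Python with a thread pool — handle_request is a stand-in I made up; for a real service you would swap in an actual HTTP call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Stand-in for the operation under load (e.g. a web service call)."""
    time.sleep(0.01)      # simulate a little work per request
    return i

start = time.perf_counter()
# Hammer the operation with 100 requests across 20 concurrent workers.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(handle_request, range(100)))
elapsed = time.perf_counter() - start

print(f"completed {len(results)} requests in {elapsed:.2f}s")
```

Real load tools layer ramp-up curves, user mixes, and counters on top, but the core is the same: concurrency plus measurement.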

All in all, I now understand why Microsoft believes that unit testing is only one piece of the puzzle. Combining different testing types with code coverage, source control, bug tracking and profiling makes a compelling case for a single tool that removes the complexity and integrates all of these heterogeneous tasks into one solution. Team System may very well be that solution for many a developer on the Windows platform.

If I have one beef with Team System, it is actually something Microsoft thinks is its biggest strength. It is so tightly integrated that it will not be easy to plug in external source control and bug management tools directly. This could be bad. A development shop that may have shelled out tens of thousands of dollars for a redundant Perforce solution isn't going to look too kindly on the idea that it now needs to use Microsoft's new and untested source control management system. (Let's not get into the old discussion about how much VSS sucks... everyone knows it... including Microsoft.) For me, I use Subversion with TortoiseSVN (you can read about my experiences setting it up here) and am now looking at moving my bug reporting tool from BugZilla to something like FogBugz. The last thing I want to do is alter our development process again... I am just finishing doing that. I want my existing repo to 'just work'. I'm afraid that's not possible.

Maybe that will change by the time VS2005 ships. Of course, everything I was shown can change by then... so I will reserve judgement till I have it in my hot little hands. But from what Tom and Jason have just shown me, I think Microsoft just might have done it right.

Posted by SilverStr at June 22, 2004 01:15 AM | TrackBack
Comments

If I may recommend a bug tracking tool, and I've looked long and hard myself, it's Trac (http://www.edgewall.com/products/trac/). It's free, it's web-based, and it integrates directly with Subversion. It does require Python and Apache, but it's the most comfortable bugtracker I've seen.

(And if you want something with a little more functionality, there's Scarab (http://scarab.tigris.org) by the same people who wrote Subversion. It requires Tomcat and is very complex.)

Posted by: Gunther Schmidl at June 23, 2004 04:33 AM

.... and one ring to rule them all.

Posted by: Arcterex at June 23, 2004 08:08 AM

There's always NCover.

http://www.lazycoder.com/weblog/index.php?p=114

Posted by: Scott at June 23, 2004 11:45 AM