February 12, 2005

Behind the Scenes

Over the last while, I have had a few requests to share more information about how I develop my software. Today I noticed that Nick Bradbury (author of HomeSite, TopStyle and FeedDemon) did just that, as did Adam Stiles (author of NetCaptor). So I figured why not jump on the bandwagon and do the same thing.

This is probably only of interest to fellow developers, but if you would like a glimpse at the software development lifecycle that I use, keep reading.

Everything starts with a MindMap. Any time a new feature is to be added or a new component is to be written... it starts with the MindMap. I use Microsoft's Visio for this, since it has built-in templates for mind mapping. For those that don't know, mind mapping is a powerful way to graphically depict one's thoughts and views on a subject. In this case, it lets me break any sort of "feature" down into its basic components and its interactions with other systems. I then use that for a later stage, which is data flow diagramming.

At this point, depending on the depth of the feature, I may write a functional design specification in Microsoft Word using an in-house design template. This isn't always the case... as we have found that our MindMaps are getting better and better at defining feature AND function. When I do write the spec, it's short and sweet. There are mock-ups of how it should function, and features are linked back to the MindMap. No use duplicating effort when it's not needed. Then I move on to the data flow diagrams once I understand how the feature needs to function.

Again, I use Visio and its templates for that. I build a data flow diagram (DFD) to provide a visual representation of how the feature will interact with the system and process data. This allows the feature to be threat modelled later with that step already worked out.

With such a small team, threat modelling isn't as effective as I would like. As such, the process I use for threat modelling isn't as structured as I would have it for a larger team, where we could have more auditing of the process. Once the feature is identified, broken down into its smallest components and then diagrammed for its data flow, I can identify the areas of most risk and focus on looking at the threats there. Once I have ranked risk using the standard infosec risk formula (RISK = damage x chance; I am still not using DREAD analysis here), I then fill out an editable PDF I created, which is a Threat Model template for answering common questions and applying STRIDE fundamentals to any defined risk. I have been considering using Frank Swiderski's Threat Modeling Tool, but I just haven't had a chance to apply it to a production environment. I have played with the tool, but don't use it on a daily basis.

In areas of what I call 'risk depth', I will then build an attack tree in Visio and/or Word (depending on the depth) to ensure those risks are mitigated. Even with limited time, I have found that this step can much more clearly show risks, and more importantly show what safeguards (if any) will be put in place to mitigate them. This usually gives me a glimpse of some design decisions I probably forgot about when I was originally mind mapping. It hasn't been uncommon to come across a basic component of a feature that is riddled with potential holes; when that happens, I go back to the map and decide whether the feature is actually needed, or whether there are other ways to approach it. I am sure some people would *shudder* at the thought of that... but it's better than writing something you KNOW exposes you to more risk when it doesn't have to. And more importantly... at this point I haven't wasted any time or effort writing code that would have to be significantly refactored... which SAVES us money.

At this point all the 'architecture' stuff is worked out and I can begin coding. The first thing I do is check out the base master sources from our Subversion source control repository using TortoiseSVN (Screenshot). I really like SVN, but I must admit I have started to look elsewhere for other solutions. I have found tagging and branching to be much more difficult to understand/follow than in CVS or more commercial tools like Perforce. Economic constraints on new tools, plus the added risk of disrupting the development process, have kept me from making this decision lately... but I will need to consider it in more depth in the future. Not sure how I am going to handle it. I might pick up the Pragmatic Version Control using Subversion book, and see if it can clear up the discrepancies I have found in working with tagging.
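For anyone who hasn't hit this yet: part of what makes SVN tagging confusing coming from CVS is that Subversion has no first-class tag or branch concept at all — both are just cheap server-side copies into conventionally named directories. A sketch of the usual pattern (the repository URL and names here are invented for illustration):

```shell
# Conventional layout: trunk/, branches/, tags/
# A "tag" is just a cheap copy of trunk into tags/ (URL is hypothetical):
svn copy http://svnserver/repos/product/trunk \
         http://svnserver/repos/product/tags/release-1.2 \
         -m "Tag release 1.2"

# A branch is the exact same operation, copying into branches/ instead:
svn copy http://svnserver/repos/product/trunk \
         http://svnserver/repos/product/branches/some-feature \
         -m "Create feature branch"
```

Nothing enforces the trunk/branches/tags convention — it is purely by agreement, which is exactly why it can feel looser than CVS tags.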

At this point the development process branches off depending on what I am writing. If I am writing kernel-mode code I will typically open up a special DDK console shell which has a lot of special tools I have written to handle building kernel code in C and inline ASM for Windows. If I am writing user-mode code I do so using Visual Studio .NET 2003 Enterprise Architect, as I typically write everything in C#. That's right, all our UI is done in .NET standalone forms now.
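For those unfamiliar with the DDK console environment: its build utility is driven by a plain-text SOURCES file in the driver's directory rather than a Visual Studio project. A minimal one looks roughly like this (the target and file names are illustrative, not from my actual tree):

```
TARGETNAME=mydriver
TARGETTYPE=DRIVER
TARGETPATH=obj

SOURCES=driver.c \
        filter.c
```

With that in place, running `build` from the DDK shell compiles and links the driver with the correct kernel-mode compiler settings for the target platform.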

Once said feature has been written, it gets added to a bunch of different things:

  • A new category item is added in FogBugz, our Defect Tracking and Bug Reporting System
  • A new <target> is added to our nant build scripts. If this is a component of the kernel mode code... it gets added to the makefile, which the target will call.
  • Depending on what the feature is, it gets added to a parent <target> as a <call> component in nant... allowing it to be built in our automated build environment
  • The code and new build scripts are checked back in to SVN source control as a child of the product repository
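As a rough sketch of how those nant pieces fit together (all names and paths here are invented for illustration — this is not my actual build file), a kernel-mode component target that shells out to the makefile, and a parent target that pulls it in via <call>, would look something like:

```xml
<!-- Hypothetical component target; shells out to the DDK makefile -->
<target name="componentname" description="Build one component">
    <exec program="nmake" commandline="/f makefile" workingdir="src\componentname" />
</target>

<!-- Parent target composes components via <call> for the automated build -->
<target name="full" description="Build all components">
    <call target="componentname" />
    <!-- ...audit, obfuscation, installer and ISO targets would follow... -->
</target>
```

Running `nant componentname` then builds just that one piece, which is what makes the shortcut targets mentioned later so cheap to add.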

If the code is usermode it will go through some other things as well:

  • It will go through an FxCop code audit
  • It will be added to our nant <target> for obfuscation, in which we use XenoCode 2005
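FxCop has a command-line front end, which is what makes it practical to run inside an automated nant step rather than through the GUI. The invocation is along these lines (the assembly and report names are placeholders):

```
rem Run the FxCop rule set against the built assembly (names are illustrative)
FxCopCmd /f:ProductUI.dll /o:fxcop-report.xml
```

The XML report can then be reviewed (or diffed against the previous build's report) before the obfuscation step runs.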

If it is a kernel mode component it will also go through:

  • A static code analysis through PreFast
  • An IOStress test (about 3 GB of tests from Microsoft's lab) with Driver Verifier turned on
  • An IOStress test in normal operation
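Driver Verifier itself is enabled from the command line (or through verifier.exe's GUI) before the stress run; something along these lines, where the driver name is a placeholder:

```
rem Enable the standard Driver Verifier checks for one driver (name is a placeholder)
verifier /standard /driver mydriver.sys

rem Check what is currently being verified, and reset when the run is done
verifier /query
verifier /reset
```

A reboot is needed for the settings to take effect, so this wraps around the stress run rather than sitting inside it.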

Once everything passes all these checks, it's ready to be placed in an installer package for testing. I use an installer called InstallAware from MimarSinan International, which I would highly recommend if it weren't for the unprofessional customer service I have received. I am not linking to them as I do not want to give them any more 'google juice' than they already have. I am currently seeking alternatives, but have had a lot of difficulty due to the specific requirements I have for driver installation. I have been testing WiX but I can't say that I would put it in a production environment... yet.

The automated build system will then, depending on the target (standalone exe or a CD), build the installer exe and then the ISO. We actually use cygwin and the UNIX mkisofs utility to build the ISO images.
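The mkisofs step is a one-liner from the cygwin shell; a sketch (the volume label, staging directory and output name are made up for illustration):

```shell
# Build a Joliet + Rock Ridge ISO from the staged CD directory (paths are illustrative)
mkisofs -J -r -V "PRODUCT_CD" -o product.iso ./cd-staging/
```

The -J flag adds Joliet extensions so long filenames show up properly under Windows, which matters for a CD that is mastered on a UNIX-style toolchain but consumed on Windows.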

At this point we have an ISO image we can use for testing. We are starting to explore automating some testing using NUnit and NUnitForms. I wish I could say this was already rolled out in a production environment, but truth be told I have just been too busy to do that with these unit tests. I KNOW I need to do this... but I have only written a few unit tests to date, as I slowly integrate the testing as I fix bugs.
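For the curious, an NUnit 2.x fixture is just an attributed C# class, which is part of why adding tests incrementally as bugs get fixed is so low-friction. A minimal sketch (the LicenseParser class under test is invented for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class LicenseParserTests
{
    // LicenseParser is a hypothetical class under test
    [Test]
    public void ParsesWellFormedKey()
    {
        LicenseParser parser = new LicenseParser();
        Assert.IsTrue(parser.IsValid("ABCD-1234-EFGH"));
    }

    [Test]
    public void RejectsEmptyKey()
    {
        LicenseParser parser = new LicenseParser();
        Assert.IsFalse(parser.IsValid(""));
    }
}
```

The compiled test assembly is then pointed at by the NUnit console runner, which slots naturally into a nant target alongside the other audit steps.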

Seems like a lot of tools... but the reality is that they have SIGNIFICANTLY increased my productivity and the quality of the codebase... since most of this is automated. As an example... through nant I can do things like:

  • nant CD - Builds everything (both user and kernel code), runs through all audit tests, obfuscates, builds the installer, builds the ISO, updates the RSS feed with the new version and copies it to the test server
  • nant UI - Builds just the UI and obfuscates it... allowing for debugging on local machine
  • nant debug drivers - Just build the kernel drivers in debug mode
  • nant componentname - Just build a specific target component. Quick and easy way to build something without having to open the IDE and prepping for a compile

The list could go on for quite some time... as there are shortcuts for pretty much every step. The interesting thing, though, is that for simplicity... I rarely use more than a few of those, since the scheduled build does it all for me already. I do some local machine coding, then a full build to make sure I don't break anything... and leave the rest for the servers to deal with.

What's really interesting is that a LOT of these tools are pretty cost effective (ie: cheap). I didn't have to shell out $10,000+ per machine to get the environment to work. And the tools that do cost money are worth their weight in gold (with the possible exception of InstallAware, which I am still debating).

Well there you have it. I run at a 12 on the Joel Test all the time thanks to these tools. And it makes the software development process that much better. I would love to know how others are managing their process. Feel free to link back to this entry so I can read how you do it!

Posted by SilverStr at February 12, 2005 11:44 AM | TrackBack

If you do much mind mapping in general, I would suggest taking a look at MindManager X5 from Mindjet (http://www.mindjet.com). I use it extensively and have found it to be an extremely useful tool for capturing thoughts and ideas, and translating them into documents and presentations.

Posted by: Jason at February 13, 2005 11:51 AM

Ya, I was looking at MindJet with its TabletPC support.

Good to hear others are using it. Thanks for the link!

Posted by: SilverStr at February 14, 2005 09:17 AM

I don't know how nant friendly it is, but InnoSetup is a lovely free installer I use for some internal projects: http://www.jrsoftware.org/isinfo.php. It's not Windows Installer based, but a stand-alone exe made with Delphi. There's a GUI tool included in the quickstart pack that I can't live without (too lazy to build the scripts myself, though they're just text files). Since it's free, it may not be a bad idea to give them a try. There may be something on the net about getting nant to work with InnoSetup, since a good number of the applications I have installed use it as their installer.

I stick to free versions, so I tend to stick to things like InnoSetup, WiX or straight-up Windows Installer tools like Orca. Hell, you're using VS.NET, which has a pretty decent installer; why not go with that? The only downside I think is that it's mostly GUI based and hard to automate, though it's been a while since I played with it.

Posted by: Jeremy Brayton at February 16, 2005 10:14 PM