February 12, 2005
Behind the Scenes
Over the last while, I have had a few requests to share more information about how I develop my software. Today I noticed that Nick Bradbury (author of HomeSite, TopStyle and FeedDemon) did just that, as well as Adam Stiles (author of NetCaptor). So I figured why not jump on the bandwagon and do the same thing.
This is probably only of interest to fellow developers, but if you would like a glimpse at the software development lifecycle that I use, keep reading.
Everything starts with a MindMap. Any time a new feature is to be added or a new component is to be written... it starts with the MindMap. I use Microsoft's Visio product for this, since it has built-in templates for mind mapping. For those that don't know, mind mapping is a powerful way to graphically depict one's thoughts and views on a subject. In this case, we can break any sort of "feature" down into its basic components and its interactions with other systems. I then use that for a later stage, which is data flow diagramming.
At this point, depending on the depth of the feature I may write a functional design specification in Microsoft Word using a design template I use in house. This isn't always the case... as we have found that our MindMaps are getting better and better at defining feature AND function. When I do write the spec, it's short and sweet. There are mock-ups of how it should function, and features are linked back to the MindMap. No use duplicating efforts when it's not needed. Then I move on to the data flow diagrams once I understand how the feature needs to function.
Again, I use Visio and its templates for that. I can build a data flow diagram (DFD) to provide a visual representation of how the feature will interact with the system and process data. This allows the feature to be threat modelled later, with that step already worked out.
With such a small team, threat modelling isn't as effective as I would like. As such, the process I use for threat modelling isn't as structured as I would have it for a larger team, where we could have more auditing of the process. Once the feature is identified, broken down into its smallest components and then diagrammed for its data flow, I can identify the areas of most risk and focus on looking at the threats there. Once I have ranked risk using the standard infosec risk formula (RISK = damage × chance; I am still not using DREAD analysis here), I then fill out an editable PDF I created, which is a Threat Model template for answering common questions and applying STRIDE fundamentals to any defined risk. I have been considering using Frank Swiderski's Threat Modeling Tool, but I just haven't had a chance to get around to applying it to a production environment. I have played with the tool, but don't use it on a daily basis.
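To give a feel for that ranking step, here is a quick shell sketch that sorts candidate threats by damage × chance. The threats and their 1-10 scores below are entirely made up for illustration:

```shell
# Hypothetical threats scored 1-10 for damage and chance of occurrence.
# Input format: threat,damage,chance
# awk computes RISK = damage * chance; sort puts the highest risk first.
ranked=$(printf '%s\n' \
  "SQL injection in login form,9,7" \
  "Log file tampering,5,3" \
  "Session token spoofing,8,4" |
  awk -F, '{ printf "%d,%s\n", $2 * $3, $1 }' |
  sort -t, -k1,1 -rn)
echo "$ranked"
```

The output is a worklist ordered by risk, which is exactly what gets carried into the Threat Model template.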
In areas of what I call 'risk depth', I will then build an attack tree in Visio and/or Word (depending on the depth) to ensure those risks are mitigated. With limited time, I found that this step can much more clearly show risks, and more importantly show what safeguards (if any) will be put in place to mitigate them. This usually gives me a glimpse of some design decisions I probably forgot about when I was originally mind mapping. It hasn't been uncommon to come across a basic component of a feature that is riddled with potential holes; when that happens, I go back to the map and decide if the feature is actually needed, or whether there are other ways to approach it. I am sure some people would *shudder* at the thought of that... but it's better than writing something you KNOW exposes you to more risk when it doesn't have to. And more importantly... at this point I haven't wasted any time or effort writing code that would have to be significantly refactored... which SAVES us money.
At this point all the 'architecture' stuff is worked out and I can begin coding. The first thing I do is check out the base master sources from our Subversion source control repository using TortoiseSVN (Screenshot). I really like SVN, but I must admit I have started to look elsewhere for other solutions. I have found tagging and branching to be much more difficult to understand/follow than in CVS or more commercial tools like Perforce. Economic constraints on new tools, plus the added risk of disrupting the development process, have kept me from making this decision lately... but I will need to consider it in more depth in the future. Not sure how I am going to handle it. I might pick up the Pragmatic Version Control using Subversion book, and see if it can clear up discrepancies I have found in working with the tagging.
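For anyone unfamiliar with the SVN model that tripped me up: tags and branches are just cheap server-side copies under conventional paths. The sketch below shows the shape of the commands; the repository URL and version numbers are hypothetical, so the commands are printed rather than run against a real server:

```shell
# Subversion tagging/branching sketch. The repo URL is made up, so we only
# assemble and print the commands instead of executing them.
REPO="http://svn.example.com/repos/product"

tag_cmd="svn copy ${REPO}/trunk ${REPO}/tags/1.2.0 -m \"Tag the 1.2.0 release\""
branch_cmd="svn copy ${REPO}/trunk ${REPO}/branches/1.2.x -m \"Maintenance branch for 1.2\""

echo "$tag_cmd"
echo "$branch_cmd"
```

Nothing enforces the trunk/tags/branches layout; it is only a convention, which is a big part of why tagging in SVN can feel harder to follow than in CVS or Perforce.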
Once said feature has been written, that feature gets added to a bunch of different things:
If the code is usermode it will go through some other things as well:
If it is a kernel mode component it will also go through:
Once everything passes all these checks, it's ready to be placed in an installer package for testing. I use an installer called InstallAware from MimarSinan International, which I would highly recommend if it wasn't for the unprofessional customer service I have received. I am not linking to them as I do not want to give them any more 'google juice' than they already have. I am currently seeking other alternatives, but have had a lot of difficulty due to the specific requirements I have for driver installation. I have been testing WiX but I can't say that I would put this in a production environment... yet.
The automated build system will then, depending on the target (standalone exe or a CD) build the installer exe and then the ISO. We actually use cygwin and the UNIX mkisofs utility to build the ISO images.
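For the curious, the mkisofs step looks roughly like the sketch below. The paths and volume label are hypothetical, and the command is assembled and printed rather than executed, since mkisofs (via cygwin) may not be on your PATH:

```shell
# Hypothetical staging tree and output path for the CD image.
STAGE_DIR="build/iso-staging"    # tree holding the installer exe and support files
ISO_OUT="build/product-cd.iso"

# -J adds Joliet names for Windows, -R adds Rock Ridge attributes,
# -V sets the volume label, -o names the output image.
iso_cmd="mkisofs -J -R -V PRODUCT_CD -o ${ISO_OUT} ${STAGE_DIR}"
echo "$iso_cmd"
```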
At this point we have an ISO image we can use for testing. We are starting to explore the automation of some testing using NUnit and NUnitForms. I wish I could say this was already rolled out in a production environment, but truth be told I have just been too busy to do that with these unit tests. I KNOW I need to do this... but I have only written a few unit tests to date as I slowly integrate the testing as I fix bugs.
Seems like a lot of tools... but the reality is that they have SIGNIFICANTLY increased my productivity and the quality of the codebase... since most of this is all automated. As an example... through nant I can do things like:
The list could go on for quite some time... as there are shortcuts to pretty much build in any step. The interesting thing though is that for simplicity... I rarely use more than just a few of those since the scheduled build does it all for me already. I do some local machine coding stuff, then a full build to make sure I don't break anything... and leave the rest for the servers to deal with.
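To give a feel for what those nant targets look like, here is a minimal sketch of a build file. The project name, target names, paths and assembly names are all invented for illustration; a real build file would be considerably longer:

```xml
<!-- Minimal nant build file sketch; all names and paths are hypothetical. -->
<project name="product" default="build">

  <!-- Compile the Visual Studio solution in release mode -->
  <target name="build" description="Compile the solution">
    <solution solutionfile="src/Product.sln" configuration="release" />
  </target>

  <!-- Run the NUnit tests against the freshly built assemblies -->
  <target name="test" depends="build" description="Run the NUnit tests">
    <nunit2>
      <formatter type="Plain" />
      <test assemblyname="build/Product.Tests.dll" />
    </nunit2>
  </target>

  <!-- Package the installer output into a CD image via mkisofs -->
  <target name="dist" depends="test" description="Build the CD image">
    <exec program="mkisofs" commandline="-J -R -o build/product-cd.iso build/iso-staging" />
  </target>

</project>
```

A developer can run something like `nant test` locally for a quick check, while the build server runs the full chain on a schedule.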
What's really interesting is that a LOT of these tools are pretty cost effective (ie: cheap). I didn't have to roll out $10,000+ for each machine to get the environment to work. And the tools that do cost money are worth their weight in gold (with the exception of InstallAware, which I am still debating).
Well there you have it. I run at a 12 on the Joel test all the time thanks to these tools. And it makes the software development process that much better. Would love to know how others are managing their process. Feel free to link back to this entry so I can read how you do it!

Posted by SilverStr at February 12, 2005 11:44 AM