February 27, 2005

More About Me

Recently I have seen a couple of trackbacks from people talking about me and some of my posts. There has been some confusion about what I do for a living: some think I am a security consultant, and some think I am a programmer.

I am kinda both. I am a computer security software architect, and I own my own company that writes computer security software. Imagine if a CISSP mated with a secure software engineer... the result would be the birth of me! Of course, like the black sheep in any family... something had to be different. That would have to be the entrepreneurial streak in me that drives everything I do.

My current stuff is based on the Windows platform, since that is where a lot of 'pain' exists for my corporate clients. I do some security consulting and write specific software for our clients, but this has really been a stopgap to bootstrap the company while I build some commercial off-the-shelf (COTS) software to handle specific weaknesses I have identified on the Windows platform.

Does that mean I hate Unix now? Not at all. Do I think Windows sucks? Not at all. It's about the right tool for the right job... and anything can be made secure... just as anything can be made INSECURE. Having the luxury of an infosec background while at the same time being a developer gives me a unique perspective on things.

So there you have it. I hope I cleared up the confusion.

If you want to talk to me, feel free to drop me an email. Or, why not Skype me? You can search for me on Skype as "Dana Epp".

Posted by SilverStr at 01:21 PM | Comments (1) | TrackBack

February 22, 2005

Guerrilla Threat Modelling

Peter Torr has done it again. He has written an EXCELLENT article on writing a practical threat model... getting rid of the cruft of useless theory and applying real-world experience to how to get it done. If you are part of a team that needs a no-nonsense approach to threat modeling, you should read his article on "Guerrilla Threat Modelling". Well worth the investment in time.

Peter, a suggestion: follow up this article with another one on actually writing attack trees. Then I can point people to your two articles instead of constantly having to explain this to them. :)

Posted by SilverStr at 10:04 PM | Comments (2) | TrackBack

February 21, 2005

PuTTY Vulnerabilities - Two Integer Overflows... patch now

PuTTY 0.57, released yesterday, fixes two security holes which can allow a malicious SFTP server to execute code of its choice on a PSCP or PSFTP client connecting to it. It is recommended that everybody upgrade to 0.57 as soon as possible.

You can download it here.

Posted by SilverStr at 07:40 AM | TrackBack

February 20, 2005

Looking for info on Automated Functional Testing Tools

I am currently looking at building a more robust automated functional testing suite to go along with our formal test plan, and have come to the realization that there are a LOT of tools out there. I am hoping some of my readers may have some insight into the tools they use on a regular basis.

Right now I am looking at Mercury Interactive's QuickTest Professional, but I am having no luck finding out how much the tool costs with the .NET Winform plugin. It doesn't bode well for the cost when they won't openly publish it. As a small ISV I need a cost-effective solution that doesn't hurt the pocketbook, but doesn't kill my QA guys either.

Here is what I THINK I need in a functional testing tool:

  • Easy Visual IDE to build test scripts without much programming knowledge
  • Supports Winforms (.NET 1.1)
  • Automated runs using cmd line tools
  • Works on W2KSP4, XPSP2, Srv03
  • Lightweight tool... no bulky installs
  • Supports custom actions (VBScript/VB is fine)
  • Good documentation and active support forums
  • Some sort of automated notification of the success or failure of an automated test session (ie: email)
  • Affordable for a small ISV.

I envision a QA guy being able to visually point and click through common functional tasks and record how a feature should work, with the tool converting that into a test script. This script can then be checked into Subversion under a special QA tree for either "Daily" or "Weekly" runs. The Daily scripts would be for critical show-stopper functional tests, whereas the Weekly scripts would include the Daily scripts plus the non-critical scripts we build. Then, by automatically checking out the scripts from SVN onto a test machine in the Lab... the environment should be able to run the tests immediately as desired.
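A minimal sketch of that Daily/Weekly dispatch logic, for the curious; the suite names come from the post, but the choice of weekly day and the QA tree layout are assumptions on my part:

```python
from datetime import date

# Hypothetical QA tree layout checked out from Subversion:
#   QA/Daily/  - critical show-stopper functional tests
#   QA/Weekly/ - non-critical tests run alongside the Daily suite
def suites_for(run_day: date, weekly_day: int = 6) -> list[str]:
    """Pick the QA suites for a run; a Weekly run includes the Daily suite."""
    if run_day.isoweekday() == weekly_day:  # 6 = Saturday (assumed weekly day)
        return ["Daily", "Weekly"]
    return ["Daily"]
```

A scheduled task on the Lab machine could check out the tree, call this to pick the suites, and run everything under the matching folders.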

The process would also allow us to repro bugs reported to the triage area of our Defect Tracking System (FogBugz), then maintain regression tests against the known failed behaviour to ensure each bug gets fixed... and stays fixed.

It all sounds simple enough... but I am unsure what the right tool for this job is. Anyone have any insight? Anyone know what test framework Microsoft uses with Maddog? Anyone have a documented process they may have blogged about that includes this information?

Please feel free to comment here, or send me an email at dana@vulscan.com. I would love to hear your feedback!

Posted by SilverStr at 09:48 AM | Comments (4) | TrackBack

February 18, 2005

Remote Windows Kernel Exploitation - Step Into the Ring 0

Barnaby Jack, a research analyst at eEye Digital Security, wrote a very interesting article about kernel exploitation techniques in his paper Remote Windows Kernel Exploitation.

As someone who works at ring 0 with the Windows kernel daily... this really isn't anything new. If you read Exploiting Software: How to Break Code (my book review here) there were samples showing how to write rootkits and even how to turn off the entire Windows security model with a binary patch of just a few lines of code. Heck, it's actually only a few bytes to turn the whole thing off. Greg Hoglund has been working on rootkits, and discussing this, for years.

Anyways, although this is old news to me I think most of you will still find it interesting. You can read the paper here.

Happy reading!

Posted by SilverStr at 11:34 AM | TrackBack

February 16, 2005

Quote on Designing Software

A GREAT quote by Jamie Zawinski that I found in a post where he wrote about software development and why writing groupware is bad:

If you want to do something that's going to change the world, build software that people want to use instead of software that managers want to buy.

Posted by SilverStr at 05:38 PM | TrackBack

February 12, 2005

Behind the Scenes

Over the last while, I have had a few requests to share more about how I develop my software. Today I noticed that Nick Bradbury (author of HomeSite, TopStyle and FeedDemon) did just that, as did Adam Stiles (author of NetCaptor). So I figured why not jump on the bandwagon and do the same thing.

This is probably only of interest to fellow developers, but if you would like a glimpse at the software development lifecycle that I use, keep reading.

Everything starts with a MindMap. Any time a new feature is to be added or a new component is to be written... it starts with the MindMap. I use Microsoft's Visio for this, since it has built-in templates for mind mapping. For those that don't know, mind mapping is a powerful way to graphically depict one's thoughts and views on a subject. In this case, we can break any sort of "feature" down into its basic components and its interactions with other systems. I then use that for a later stage, which is data flow diagramming.

At this point, depending on the depth of the feature I may write a functional design specification in Microsoft Word using a design template we keep in house. This isn't always the case... as we have found that our MindMaps are getting better and better at defining feature AND function. When I do write the spec, it's short and sweet. There are mock-ups of how it should function, and features are linked back to the MindMap. No use duplicating effort when it's not needed. Then I move on to the data flow diagrams once I understand how the feature needs to function.

Again, I use Visio and its templates for that. I can build a data flow diagram (DFD) to provide a visual representation of how the feature will interact with the system and process data. This allows the feature to be threat modelled later with that step already worked out.

With such a small team, threat modelling isn't as effective as I would like. As such, the process I use for threat modelling isn't as structured as it would be for a larger team, where we could have more auditing of the process. Once the feature is identified, broken down into its smallest components and diagrammed for its data flow, I can identify the areas of most risk and focus on looking at the threats there. Once I have ranked risk using the standard infosec risk formula (RISK = damage × chance; I am still not using DREAD analysis here), I fill out an editable PDF I created, a Threat Model template for answering common questions and applying STRIDE fundamentals to any defined risk. I have been considering using Frank Swiderski's Threat Modeling Tool, but I just haven't had a chance to apply it to a production environment. I have played with the tool, but don't use it on a daily basis.
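The ranking step could be sketched like this; the threat names and the 1-10 scores below are invented for illustration, not pulled from any real model of ours:

```python
# risk = damage * chance, both scored 1-10; highest risk gets reviewed first.
def rank_threats(threats: dict[str, tuple[int, int]]) -> list[tuple[str, int]]:
    """Return (threat, risk) pairs sorted from highest to lowest risk."""
    scored = [(name, dmg * chance) for name, (dmg, chance) in threats.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Invented example threats: (damage, chance)
example = {"tampering": (9, 3), "spoofing": (4, 8), "denial of service": (6, 6)}
```

The nice thing about keeping the formula this dumb is that the whole team can sanity-check the ranking at a glance.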

In areas of what I call 'risk depth', I will then build an attack tree in Visio and/or Word (depending on the depth) to ensure those risks are mitigated. With limited time, I have found that this step can much more clearly show risks, and more importantly show what safeguards (if any) will be put in place to mitigate them. This usually gives me a glimpse of some design decisions I probably forgot about when I was originally mind mapping. It isn't uncommon to come across a basic component of a feature so riddled with potential holes that I go back to the map and decide whether the feature is actually needed, or whether there are other ways to approach it. I am sure some people would *shudder* at the thought of that... but it's better than writing something you KNOW exposes you to more risk than it has to. And more importantly... at this point I haven't wasted any time or effort writing code that would have to be significantly refactored... which means it SAVES us money.
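For what it's worth, the attack tree walk itself is only a few lines of code; the tree below is a made-up example, not one of mine:

```python
# A node is either a leaf (with a 'mitigated' flag) or an AND/OR
# combination of child subtrees, as in a classic Schneier-style attack tree.
def attack_feasible(node) -> bool:
    """A leaf is feasible unless mitigated; OR needs any child, AND needs all."""
    if "children" not in node:
        return not node.get("mitigated", False)
    results = [attack_feasible(child) for child in node["children"]]
    return all(results) if node["op"] == "AND" else any(results)

# Hypothetical tree: the goal stays reachable only while some branch is feasible.
tree = {"op": "OR", "children": [
    {"name": "steal admin password", "mitigated": True},
    {"op": "AND", "children": [
        {"name": "reach service port", "mitigated": False},
        {"name": "overflow request parser", "mitigated": True},
    ]},
]}
```

Flipping a single 'mitigated' flag and re-evaluating is exactly the "what safeguards kill which branches" exercise the diagrams are for.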

At this point all the 'architecture' stuff is worked out and I can begin coding. The first thing I do is check out the base master sources from our Subversion source control repository using TortoiseSVN (Screenshot). I really like SVN, but I must admit I have started to look at other solutions. I have found tagging and branching to be much more difficult to understand and follow than in CVS or more commercial tools like Perforce. Economic constraints on new tools, plus the added risk of disrupting the development process, have kept me from making this decision lately... but I will need to consider it in more depth in the future. Not sure how I am going to handle it. I might pick up the Pragmatic Version Control Using Subversion book, and see if it can clear up the discrepancies I have found in working with tagging.

At this point the development process branches off depending on what I am writing. If I am writing kernelmode code I will typically open up a special DDK console shell which has a lot of special tools I have written to handle building kernel code in C and inline ASM for Windows. If I am writing usermode code I do so using Visual Studio .NET 2003 Enterprise Architect as I typically write everything in C#. That's right, all our UI is done in .NET standalone forms now.

Once a feature has been written, it gets added to a bunch of different things:

  • A new category item is added in FogBugz, our Defect Tracking and Bug Reporting System
  • A new <target> is added to our nant build scripts. If it is a component of the kernel mode code... it also gets added to the makefile, which the target will call.
  • Depending on what the feature is, it gets added to a parent <target> as a <call> component in nant... allowing it to be built in our automated build environment
  • The code and new build scripts are checked back into SVN source control as a child of the product repository
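For readers who haven't used nant, a fragment in the spirit of that <target>/<call> wiring might look like this; the target and file names are invented, so treat it as a sketch rather than our actual build script:

```xml
<!-- Hypothetical parent target that pulls a new component into the build -->
<target name="components">
    <call target="newfeature" />
</target>

<target name="newfeature" description="Build the (hypothetical) new feature">
    <csc target="library" output="build/NewFeature.dll">
        <sources>
            <include name="src/NewFeature/*.cs" />
        </sources>
    </csc>
</target>
```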

If the code is usermode it will go through some other things as well:

  • It will go through an FxCop code audit
  • It will be added to our nant <target> for obfuscation, in which we use XenoCode 2005

If it is a kernel mode component it will also go through:

  • A static code analysis through PreFast
  • An IOStress test (about 3 GB of tests from Microsoft's lab) with Driver Verifier turned on
  • An IOStress test in normal operation

Once everything passes all these checks, it's ready to be placed in an installer package for testing. I use an installer called InstallAware from MimarSinan International, which I would highly recommend if it weren't for the unprofessional customer service I have received. I am not linking to them as I do not want to give them any more 'google juice' than they already have. I am currently seeking alternatives, but have had a lot of difficulty due to the specific requirements I have for driver installation. I have been testing WiX, but I can't say that I would put it in a production environment... yet.

The automated build system will then, depending on the target (standalone exe or a CD), build the installer exe and then the ISO. We actually use cygwin and the UNIX mkisofs utility to build the ISO images.

At this point we have an ISO image we can use for testing. We are starting to explore automating some testing using NUnit and NUnitForms. I wish I could say this was already rolled out in a production environment, but truth be told I have just been too busy to do that with these unit tests. I KNOW I need to do this... but I have only written a few unit tests to date, as I slowly integrate testing as I fix bugs.

Seems like a lot of tools... but the reality is that they have SIGNIFICANTLY increased my productivity and the quality of the codebase... since most of this is automated. As an example... through nant I can do things like:

  • nant CD - Builds everything (both user and kernel code), runs through all audit tests, obfuscates, builds the installer, builds the ISO, updates the RSS feed with the new version and copies it to the test server
  • nant UI - Builds just the UI and obfuscates it... allowing for debugging on the local machine
  • nant debug drivers - Builds just the kernel drivers in debug mode
  • nant componentname - Builds just a specific target component. A quick and easy way to build something without having to open the IDE and prep for a compile

The list could go on for quite some time... as there are shortcuts for pretty much every step. The interesting thing, though, is that for simplicity... I rarely use more than a few of those, since the scheduled build does it all for me already. I do some local machine coding, then a full build to make sure I don't break anything... and leave the rest for the servers to deal with.

What's really interesting is that a LOT of these tools are pretty cost effective (ie: cheap). I didn't have to shell out $10,000+ per machine to get the environment to work. And the tools that do cost money are worth their weight in gold (except perhaps InstallAware, which I am still debating).

Well, there you have it. I score a 12 on the Joel Test all the time thanks to these tools. And it makes the software development process that much better. I would love to know how others are managing their process. Feel free to link back to this entry so I can read how you do it!

Posted by SilverStr at 11:44 AM | Comments (3) | TrackBack

February 10, 2005

Limited Admin Rights on Windows XP with Blank Password

Here is something I learned from Aaron Margosis today that I think will be of interest to a lot of you. If you run with least privilege, you have probably set up an "Admin" account with Administrator rights and a normal account with Limited User rights.

Now here is something I learned today that floored me. Since the introduction of Windows XP, a blank password is actually MORE SECURE for certain scenarios than a weak password. By default, any account that has a blank password can only be used to log on at the console. It cannot get network access, and it cannot be used with "Run As". Isn't that interesting? A middle ground for the mom-and-pop home machines that don't use passwords anyway... and want to limit their normal user access rights.
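As far as I can tell, this behaviour comes from the "Accounts: Limit local account use of blank passwords to console logon only" policy, which XP enables by default. It maps to a registry value roughly like the following; double-check the path yourself rather than trusting my memory:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"LimitBlankPasswordUse"=dword:00000001
```

Set it to 0 and blank-password accounts can be used over the network again, which defeats the whole point.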

There are two problems I see with this approach though. You have to trust everyone who has physical access to the computer... which is something I cannot do for my TabletPC (especially when on the road) or office machines. Secondly... as a normal user I hate having to press Win+L to switch users and log on if I want to do anything 'adminy' (is that even a word???). I like runas, and you just can't take that away from me.

But if you are ok with those constraints... party on with an Admin account with a blank password that only allows console login.

Posted by SilverStr at 04:01 PM | Comments (1) | TrackBack

Even security software gets attacked

In case you haven't heard, in the last week or so a bunch of security software has been found to be vulnerable to attack. First, eWeek reported that a new trojan was targeting Microsoft's AntiSpyware Beta. Sophos reports that the trojan includes a keylogger and attempts to steal credit card details, turn off other anti-virus applications, delete files, install other malicious code and download code from the Internet. All the ugly stuff you wouldn't want to have happen.

Then a major flaw was found in most Symantec products: a high-risk vulnerability where a successful exploit could lead to code execution attacks.

Then, most recently, ISS found that F-Secure Anti-Virus, F-Secure Internet Gatekeeper and F-Secure Internet Security are vulnerable to a buffer overflow caused by improper bounds checking when handling ARJ archives.
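To picture the flaw class (this is not F-Secure's actual code, and it's a toy header format rather than the real ARJ layout), here is the kind of bounds check that improper archive handling skips:

```python
import struct

def read_block(buf: bytes) -> bytes:
    """Parse a toy archive block: 2-byte little-endian length, then payload.

    The declared length must be validated against the data actually present;
    trusting it blindly before copying is the classic overflow pattern."""
    if len(buf) < 2:
        raise ValueError("truncated header")
    (declared,) = struct.unpack_from("<H", buf, 0)
    if declared > len(buf) - 2:
        raise ValueError("declared length exceeds available data")
    return buf[2:2 + declared]
```

In C the missing check means a memcpy past the end of a fixed buffer; in a parser running with high privilege, that's game over.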

Look, vulnerabilities are inevitable. They will happen in software, including security software. Security software != secure software, and you need to remember that. On top of that, I don't think it's fair to assume that just because flaws are detected, the product doesn't do what it says it does.

When I looked at how Symantec handled its issue, I was initially frustrated that they had a vulnerability in something they weren't even using anymore. But that quickly turned to respect, as their response to the problem was to simply remove it... one of the four things you can do when you find a threat like this. (If you don't know what I am talking about... you need to get the Microsoft Press book on Threat Modeling.)

F-Secure was quick to fix their problem, and they should be credited for that as well. In fact, I was impressed with how quickly they came out with the fix. If anything, my only disappointment is that they were not more transparent in how they dealt with it. One of my favorite blogs is the F-Secure blog. Although it's written by staff in their lab... I noticed they had no problem commenting on flaws in Microsoft products... but not their own. I have come to enjoy and respect their feed, and would have expected them to be more open about their own issue through their blog once they released the fix. Instead they simply released an advisory and left it at that.

All in all, no software is immune to attack. How resilient it is in the face of those attacks is a different matter. And I think these guys did a good job of handling it. Of course, trojans that turn off antispyware are much harder to defend against... which is why you should be running with least privilege to reduce the attack surface available to such hostile code... eliminating its ability to copy malicious payloads into system directories.

But that's just me.

UPDATE: As Xavier Ashe has pointed out, F-Secure has responded and posted a quick entry on the vulnerability in their stuff. Good show.

Posted by SilverStr at 02:51 PM | Comments (2) | TrackBack

February 08, 2005

High level Threat Modelling

Peter Torr has an interesting article about high level threat modeling.

The gist of his article is that the process consists of six (possibly repeated) steps:

  1. Preparation
  2. Brainstorming
  3. Drafting
  4. Review
  5. Verification
  6. Closure

I highly recommend you go read his article to dig into the depth of each step.

Good job Peter.

Posted by SilverStr at 08:39 AM | Comments (4) | TrackBack

February 03, 2005

When security fails to be effective...

As Bruce Schneier says, it doesn't matter what kind of security you implement if it's easy to get around.

Want proof?

Posted by SilverStr at 03:23 PM | Comments (1) | TrackBack