June 29, 2004

Guidance for Securing Microsoft Windows XP Systems for IT Professionals: A NIST Security Configuration Checklist

NIST has released Special Publication 800-68 to assist IT professionals (particularly Windows XP system administrators and information security personnel) in effectively securing Windows XP systems. It discusses Windows XP and various application security settings in technical detail.

The guide provides insight into the threats and security controls that are relevant for various operational environments, such as for a large enterprise or a home office. It describes the need to document, implement, and test security controls, as well as to monitor and maintain systems on an ongoing basis. It presents an overview of the security components offered by Windows XP and provides guidance on installing, backing up, and patching Windows XP systems. It discusses security policy configuration, provides an overview of the settings in the accompanying NIST security templates, and discusses how to apply additional security settings that are not included in the NIST security templates.

It demonstrates securing popular office productivity applications, Web browsers, e-mail clients, personal firewalls, antivirus software, and spyware detection and removal utilities on Windows XP systems to provide protection against viruses, worms, Trojan horses, and other types of malicious code. This list is not intended to be a complete list of applications to install on a Windows XP system, nor does it imply NIST's endorsement of particular commercial off-the-shelf (COTS) products.

You can download the documents here.

Posted by SilverStr at 03:07 PM | TrackBack

June 28, 2004

Security: The root of the problem according to Marcus Ranum

ACM Queue has an interesting article written by Marcus on Security: The root of the problem.

In the article, Marcus explores why it is that we can't seem to produce secure, high quality code. From his perspective, one distressing aspect of software security is that we fundamentally don't seem to "get it." We keep trying to teach programmers how to write more secure code and we seem to be failing miserably at the task.

Can't say that I disagree with this. Think about it.... just how long have buffer overflows plagued our industry? How many times are people shown how to code defensively, only to have the practice fall by the wayside during tight deadlines?

Anyways, the article is an interesting read. Enjoy!

Posted by SilverStr at 02:48 PM | TrackBack

June 25, 2004

Microsoft, You’re not setting a very good example. I am disappointed.

I know I am going to get myself in trouble for this... and will probably be banned from the Microsoft campus, but I saw a post by a Microsoft employee and felt compelled to respond.

I am taking Aaron Margosis to task and following his suggestion. In his post he says:

Customers: if you see any MS sales, MCS, Premier, PSS, etc., doing web or email as admin, please tell them, “You’re not setting a very good example. I am disappointed.”

How about PowerPoint? How about Word? How about demos of stuff not needing to be run as admin? How about running a remote desktop? I saw all of these when I was at Microsoft.

When I was walking through the Trustworthy Computing fest last week at Microsoft I stopped at NINE machines that Microsoft employees were using, and all nine were logged on as administrator. 9 for 9 were NOT running with least privilege. But that's not the frustrating part. This was a SECURITY-RELATED computing fest. You would think that this crowd would be much more aware of and focused on such things.

Combine that with the fact I recently found out, that in the latest RC of XP SP2 you can no longer use "runas" on Windows Update right out of the box... and I see serious problems on the Microsoft campus. It seems many don't wish to eat their own dog food.

Microsoft, You’re not setting a very good example. And I am disappointed.

Posted by SilverStr at 03:30 PM | Comments (11) | TrackBack

June 23, 2004

Threat Modeling book now available??

Well now... Michael reports that he got his hands on a copy of Frank Swiderski and Window Snyder's Threat Modeling recently. Anyone else able to get it? Amazon isn't showing it available yet.

I have been waiting for this book for some time. Come on amazon... take my money!!!!!

Posted by SilverStr at 02:17 PM | Comments (2) | TrackBack

Running with Least Privilege on Windows

Aaron Margosis pointed me today to his weblog, which contains some good references and information about running as a limited user on Windows. He had an interesting comment on the different credentials post I did almost a year ago, in which he uses a shortcut directly to a cmd window with runas instead of using the explorer view. His suggestion is to do:

C:\WINDOWS\system32\runas.exe /u:Administrator "%windir%\System32\cmd.exe /k cd c:\ && color fc && title ***** Admin console *****"

Makes total sense if you want to use an admin shell. Personally I prefer having the UI available through the explorer view... which has its own limits. This way I don't have to try to remember where the CPL paths are, or what they are called. I just click 'My Computer' and then 'Control Panel' and have at 'er. To each his own. Aaron has some good pointers on his blog about different ways of approaching this. Consider checking it out.

Posted by SilverStr at 09:31 AM | TrackBack

June 22, 2004

Afternoon at the Microsoft Security Summit

After a great box lunch I attended the Implementing Advanced Server and Client Security session, put on by Steve Riley of Microsoft. There were a few interesting 'take aways' from this session.

First was a point I have made for some time. I think it shocked the room when Steve said it was bullsh*t to disable SSID broadcast on your access points. It's useless anyways... association request messages are broadcast in clear text, so if you sniff long enough you will see legit traffic associating, giving you the SSID anyways. By leaving the SSID broadcast enabled, though... you allow the wireless tools in Windows XP to 'just work'.

I also found out that later in the year Microsoft will be releasing Microsoft Audit Collection Services (MACS), which is basically the same functionality Unix has had forever with syslog. The neat difference is that it is designed to import directly into a data source like SQL Server. This is nice; it is about damn time.

I have been wrestling with Active Directory stuff as of late, and I enjoyed Steve's 30-second AD structure. Some organizations take weeks, months, even years trying to organize an Active Directory structure that fits the politics of the organization. Steve gives us a quick way to deal with it:

  • Forests and Domains = Physical geography
  • Organizational Units = Administrative Model
  • Security and Distribution Groups = Organizational Chart

Yep... it's that simple.

Basically Steve wrapped everything up into 4 bullet points (even though he had over 120 slides for a 90-minute presentation *shudder*):

  1. Authenticate everywhere
  2. Validate always
  3. Authorize and audit everything
  4. Encrypt when necessary

I concur.

After this presentation I decided to head back to the developer track and sit in on Implementing Application Security Using the Microsoft .NET Framework. Great presentation by the same fellow who did the morning session.

Nothing really new here, except I did learn about the checked keyword, which I never knew existed. It allows you to do arithmetic overflow checks in your code. If an operation crosses a boundary and overflows, it will throw a System.OverflowException. Never knew that before.
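
Here is a minimal sketch of what that looks like (the values are only there to force the overflow):

// Without 'checked' this addition silently wraps around to a negative value;
// inside a checked context it throws a System.OverflowException instead.
int big = int.MaxValue;

try
{
    int result = checked(big + 1);
    Console.WriteLine(result);
}
catch (OverflowException)
{
    Console.WriteLine("Overflow caught instead of silently wrapping.");
}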

The demos of role-based security code blocks using imperative and declarative security were neat. I use WindowsIdentity and WindowsPrincipal a lot but didn't realize you could build your own with GenericIdentity and GenericPrincipal. I will have to look into it. Of course, lately I have been doing more declarative access control by using PrincipalPermission attributes, i.e.:


[PrincipalPermission(SecurityAction.Demand, Authenticated=true, Role = "Administrators")]
public class PrivilegedCode
{
...
}

Works great, and enforces access control on each method of the class.
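
For comparison, the imperative flavour using GenericIdentity and GenericPrincipal would look something like this sketch (the user name and role are placeholders, not anything from a real session; namespaces assumed are System.Security.Principal, System.Security.Permissions and System.Threading):

// Build a custom principal (the name and role below are just placeholders)
// and attach it to the current thread.
GenericIdentity identity = new GenericIdentity("dana");
GenericPrincipal principal = new GenericPrincipal(identity, new string[] { "Administrators" });
Thread.CurrentPrincipal = principal;

// Imperative demand: throws a SecurityException if the current principal
// is not in the Administrators role.
PrincipalPermission permission = new PrincipalPermission(null, "Administrators");
permission.Demand();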

With that session done, the Security Summit was over. It was time to head back to Canada. Thanks for a great time Seattle.

Only wish I had enough time to do some kayaking on Lake Union. Hey, maybe if everyone takes Steve's advice of opening the SSID next time I am down... I might even go War Kayaking and beat Phillip's mapping.

Posted by SilverStr at 08:10 PM | TrackBack

Morning at the Microsoft Security Summit

Coming down to the security summit I was hoping to really gain some good insight on Microsoft's security stance. I appreciate learning more whenever I can and thought it would be well worth the investment in time.

The keynote reinforced everything coming out of Microsoft over the last year. Andy Lees, the VP of Server Tools, provided a good foundation for people who might not know what Microsoft has been up to. (Why wasn't Ballmer up there yelling "Security, Security, Security, Security"???) Unfortunately, there was nothing new here for me. Maybe being too close to the ground I have heard it all so often that I lost the benefit of the keynote. The demo of XPSP2 was basically the same one from the RSA conference, and since I am already running it on my laptop I have already used everything presented. It was interesting to see more on the domain side of things, using group policies for the Windows firewall, so I did get something out of it.

If you attended the security webcasts over the past year you didn't need to come to the first session. It is an Introduction to Application Security and is the same presentation as the security webcast I blogged about back in February. Since it's a Level 200 session, I realize my time could be better utilized elsewhere. The presenter is engaging, and there is much you can learn if this is new to you... but it got boring fast for me. I want to leave, but I am jammed into a crowded room, which makes it difficult. I also don't wish to show any disrespect by interrupting the presentation and getting up, especially since it is otherwise a good one. I have to say I was floored when the presenter stated that he doesn't know how to make Explorer run as a different user, forcing him to log off as a normal user and jump to an administrator account to do a bit of work. I will have to go show him how to make a shortcut to iexplore.exe and check the "Run with different credentials" box to do just that. (Update: He was very thankful that I showed him that tip.)
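
(For the shortcut-averse, the command-line equivalent is just runas pointed at the browser; the account name and path below are assumptions about a typical setup.)

runas /u:MYDOMAIN\AdminAccount "C:\Program Files\Internet Explorer\iexplore.exe"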

To make my time here useful, I am going through the conference materials, and I notice Microsoft included a great security resource kit in the package. It contains a lot of interesting whitepapers, how-tos and supporting guidance information which I have posted about before. Nice to have it all in one package.

I think I am going to break out of the developer track and go over to the IT Level 300 track in the next session. It might be more challenging, and give me some new content to learn about.

Actually, this session just ended... so let's jump over to the IT Level 300 track now...


... OMG. I am in an awesome session on Implementing Application and Data Security being presented by Steve Riley, the Product Manager for the Security Business & Technology Unit at Microsoft. This guy is amazing. He is so engaging and knowledgeable on the topic that it is a refreshing change from the earlier session. Not only am I learning how to better secure Exchange, he is showing compelling reasons why the new ISA Server 2004 makes sense for me. By isolating OWA away from the perimeter DMZ and putting an ISA 2004 box in front to deal with authentication before traffic even hits OWA, I can see the benefits of reducing the attack surface by cleansing the input at ISA instead of relying on IIS, which I don't have a lot of trust in. The deployment scenarios he has shown are really interesting; I will have to follow up with him about this offline.

I am just floored at the rights management services (RMS). This isn't the DRM you are used to hearing about. Steve has shown some neat ways to use RMS within an organization, from time-basing documents to authorizing who can print or forward an email. I think they have a ways to go yet in dealing with it offline (especially for stand alone files), but it looks promising. Seeing some of the concept videos for Longhorn, I can see how this will be more closely coupled into the secured environment of the future.

I already reached my ROI on the trip last night when I got to see Team System. This was icing on the cake! Speaking of cake, it's time for lunch. Hopefully the afternoon will be as useful.

Posted by SilverStr at 12:55 PM | TrackBack

Team System Testing: Microsoft just might have it right

The history of Microsoft dev tools is riddled with folklore about how Microsoft rarely communicates with developers until it is way too late. Peter Provost's petition is an example of how developers push back in an effort to get Microsoft to listen when it seems to fall out of line. I had a refreshing experience at the geek dinner that shows that is not always the case. First off, Jason and Tom came out in RESPONSE to those concerns and to give us Microsoft's side in rebuttal. More importantly, in an effort to hear the community's thoughts on this, they came to listen to our customer feedback on Team System. I really appreciated that. No one told them to come to the dinner. No one asked them to give their perspective in such a forum. They just did it. Having the opportunity to discuss WHY we feel this way about the unit testing was nice, and it was the right way to address customer concerns early in the process. During the dinner Jason was continuously challenged on the decision not to include unit testing in the base dev tools. Different people had different views, and I think we all were focused on only a small aspect of what Jason was offering in rebuttal. But a core 'challenge' theme continued to drive the discussion.

So what was Jason challenged on at the geek dinner? To explain to us why Microsoft decided to integrate everything so tightly. With the gauntlet thrown down he responded... in a way that no one would have expected. He took us back to the Microsoft campus, found us a conference room in Building 41 and gave us a demo of the daily build. This demo apparently went even deeper than the one at TechEd, which was cool.

Robert was tasked with getting some footage for Channel9 at the geek dinner, and when Jason offered to take us to see Team System I made sure Robert had the opportunity to come and record it. He got over two hours of video, so hopefully some of that will make it up on MSDN in the next few weeks. I also got an opportunity to grab some stills of the demo on my digital camera, which I have uploaded to my Gallery. Unfortunately most of them didn't turn out well. Let's hope Robert's shots do better.

I wish I really could blog the experience as I saw it. The problem is I am sure others who were there took away different things than I did, so my account alone leaves an incomplete picture. As others blog about it I am sure Jason or Robert will link to their experiences to give you a more rounded picture as a whole. So make sure you check out their blogs over the next couple of days.

With that said, let's talk about my experience with the demo. I now understand why Microsoft coupled everything so tightly; the integration of the entire test suite considerably strengthens their offering as a heterogeneous solution that works in the existing IDE tools. There is no huge learning curve in adapting testing techniques into the tools you already use. They have reduced the complexity of the process, which in turn should expose more developers to it, resulting in the potential of more use... ultimately creating higher quality code. Think about it... unit testing and test-driven development aren't the number one priority on a lot of dev teams, even though the statistics show they reduce costs and produce higher quality software compared to the normal software development life cycle. (Let's not argue this point... there are far better people out there to fight over those stats than I) If this just becomes another part of the development process within the tools developers are already familiar with, the barrier to entry for test integration is considerably lessened. I have a poor screenshot showing how tests get integrated directly into the solution; another shot shows how, in the pane where properties normally live, a new "Test View" tab allows you to quickly see and execute your tests. I even got a shot of the test results window, which can quickly show you a pass/fail list of the tests that you run. Notice something important here… these are directly integrated into Visual Studio as just another component in the existing docking model; there is no further screen complexity or clutter to get the tools to you!!

I still hold to my signature on the petition for having some rudimentary unit testing in the entry level product for VS2005. It is definitely possible to separate out parts of the unit testing, but unless you see the bigger picture, you won't realize that what Microsoft is offering is actually much more, and is WORTH the investment in moving to the higher SKUs for Team System anyways. I think that having the basic unit testing framework in the entry level products could go a long way toward covertly transferring the body of knowledge for unit testing to new or inexperienced developers, creating an upgrade path to the bigger system as they need it.

If you want stand-alone unit testing, you can (and should) use NUnit now. (Remember Team System is still a ways off.) NUnit is a great unit-testing framework and is available today. More importantly, your work on the NUnit tests won't be wasted or lost if you move to Team System later. Their unit testing is very similar to NUnit, using attribute-based tags to define and deliver tests. Microsoft has also included a 'generic' test type which has the potential to allow you to map an NUnit-style testing framework into Team System.
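
To show what I mean by the attribute style, here is a minimal NUnit-flavoured sketch (the class under test and its numbers are made up purely for illustration):

using NUnit.Framework;

[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void TotalIncludesTax()
    {
        // OrderCalculator is a hypothetical class under test.
        OrderCalculator calc = new OrderCalculator();
        decimal total = calc.Total(100.00m, 0.07m);

        Assert.AreEqual(107.00m, total);
    }
}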

One feature I REALLY liked was the fact you can right click on a namespace, class or method, and generate the ENTIRE testing harness framework in a single pass. I am not sure if some of the people there realized how powerful that is. To be able to generate the entire harness without writing a single line of code does a lot to 'dumb' down the tedious parts to allow a developer to get right to the test code. This will go way further than preaching about why unit tests are good any day... since now you can just jump in and do it.

Am I making sense? Consider this: most documentation on unit testing explains what it's about, but rarely shows a practical harness that can be used in the code you care about... your own master sources. Microsoft's approach immediately BUILDS that for you, alleviating a major hurdle in the unit testing process... allowing the developer to immediately get to the heart of the test code without worrying about how to integrate it. Of course there is still the learning curve on what and how to test... but this is a pretty major leap forward in the integration between unit tests and the master sources in your Visual Studio project/solution.

But it didn't end there. Unit testing has been around for ages; it's just that most people don't use it. Hopefully that weakness can now be resolved with Team System. Something that turned my crank even more was the idea of test-driven development within Team System. Microsoft got this one right. And in a very neat way.

I was turned onto the idea of test-driven development when a few of the developers I managed at my last company bought me Extreme Programming Explained: Embrace Change. I tried to have an open mind, but at the time could not justify the migration to the XP development process past the idea of some pair programming and simple story/task cards. I should also state that I have not done any real test-driven development myself, mostly because I have been focused on learning other new development processes; well, to be honest, I have been too lazy to write the tests first… with the voice in the back of my mind echoing 'redundant effort… redundant effort'. Well, I think that will change with Team System. They have integrated awesome refactoring and test-driven development tasks directly into the system. My jaw dropped when they created a test and then GENERATED THE STUB for the code I would then need to write. Let me be clear here... Microsoft has taken the work out of it for you. You can write a basic test in the system, focusing on what you EXPECT to come in via params and what you will return etc... and Team System will then generate the methods within the class you are building. I tried to get a shot of the refactor right click menu, but it didn't turn out very well.
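
Since my shot didn't turn out, here is a hedged guess at the shape of a generated stub (the class and method names are invented; the point is that the body is left for you to write):

// Generated from the test I wrote first; the real logic replaces the throw.
public class PriceEngine
{
    public decimal ApplyDiscount(decimal price, decimal rate)
    {
        throw new NotImplementedException();
    }
}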

Another feature which I REALLY liked was Microsoft's hooks for code coverage. They can analyze your testing harnesses and create a report showing you the code coverage from your tests. Not only can they show you the percentage of code coverage in any section of code, they can VISUALLY show you the lines of code that are never reached. This is really important; you can graphically see when your tests are not properly covering all your code blocks. I have one shot where you can see by color code that a particular line is never hit, showing that the code execution path isn't fully tested. I have another that even shows an entire block that was missed because an exception was ALWAYS occurring.

Taking a tangent here, I am sure what I am about to say will have some people disagreeing with me in venomous debate about testing techniques. That's ok… you are welcome to your opinion. But this is my blog, and I have trapped you into reading mine :) So please bear with me.

I believe that testing is MORE vital in the failure code paths than in the normal execution blocks. My reasoning is that in my experience normal operations get tested during both functional and unit testing anyways... rarely are failures tested to see how resilient the code is when things go wrong. Many attack vectors have been shown to compromise code through the results of failure or state change, rather than successful execution. As such, code coverage is an awesome tool: it lets you write particular 'resiliency' tests and forces you to see if you are actually hitting those exception handling routines. For people writing kernel mode code, I think it's even more paramount. Structured Exception Handling (SEH) is routinely used incorrectly, causing bigger problems in the kernel than if they just let the system BSOD. I have seen ugly code where on failure a trap does nothing more than allow the code execution to continue, typically in an unstable state which will end up crashing anyways upstream on someone else's call stack, causing errors in the wrong part of the system. Microsoft's WHQL test process requires kernel code to have 70% code coverage during testing... now you can use the same metrics against your usermode code within Team System.
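
As a trivial example of what I mean by a 'resiliency' test, this hedged NUnit-style sketch deliberately drives the failure path (the parser class and its behavior are assumptions for illustration only):

[TestFixture]
public class ConfigParserFailureTests
{
    // ConfigParser is a hypothetical class; the assumption is that it throws
    // a FormatException on malformed input instead of limping along.
    [Test]
    [ExpectedException(typeof(FormatException))]
    public void MalformedInputThrowsInsteadOfContinuing()
    {
        ConfigParser parser = new ConfigParser();
        parser.Parse("<<< not valid config >>>");
    }
}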

There is so much other stuff I saw, my brain hurts. Besides unit tests, Team System includes neat testing types such as Web, Manual, and Load. The web tests were interesting as you could record a session and then play it back. I could see some interesting ways to do tainted data injection testing with this. The manual tests were a way to have a documented process on how to manually test something, and have it come full circle back into Team System in a structured manner. The load tests were interesting as you could apply a collection of events to test load scenarios; want to see how your web service would handle being slashdotted?? Be my guest. They have even integrated bug reporting and source control directly into this entire process; you can now right click on a test that failed and immediately file it as a bug directly within the IDE. I got a shot of this, but like most of the others it didn't turn out very well.

All in all, I now understand why Microsoft believes that unit testing is only one piece of the puzzle. Combining different testing types with code coverage, source control, bug tracking and profiling makes a compelling case for using a single tool to remove the complexity and integrate it all into one solution. Team System may very well be that solution for many a developer on the Windows platform.

If I have one beef with Team System, it is actually something Microsoft thinks is its biggest strength. It is so tightly integrated that it will not be easy to integrate external source control and bug management tools directly. This could be bad. A development shop that may have shelled out tens of thousands of dollars for a redundant Perforce solution isn't going to look too kindly on the thought that it now needs to use Microsoft's new and untested source control management system. (Let's not get into the old discussion about how much VSS sucks... everyone knows it... including Microsoft.) For me, I use Subversion with TortoiseSVN (you can read about my experiences setting it up here) and am now looking at moving my bug reporting tool from BugZilla to something like FogBugz. The last thing I want to do is alter our development process again... I am just finishing doing that. I want my existing repo to 'just work'. I'm afraid that's not possible.

Maybe that will change by the time VS2005 ships. Of course, everything I was shown can change by then... so I will reserve judgement till I have it in my hot little hands. But from what Tom and Jason have just shown me I think Microsoft just might have done it right.

Posted by SilverStr at 01:15 AM | Comments (3) | TrackBack

June 21, 2004

Dinner with a Gaggle of Geeks

Trip down to Microsoft was pretty good. Had great weather and enjoyed listening to Diana Krall’s new CD I got for Father's Day as I drove down to Redmond. Had a chance to hook up with a few people on the Microsoft campus, and even had an opportunity to check out some of Microsoft's latest security work during their Trustworthy Computing Fest.

The turnout for the dinner was nice: a good size that let me talk with a lot of people and not feel overwhelmed. My apologies to the few people I didn't get to spend some time with. It was nice hooking up with Ivan from the Secure Windows Initiative Attack team and discussing some of the internal security tools Microsoft is using that I was exposed to during the day at the Trustworthy Computing Fest.

I also enjoyed debating the integration of unit testing in all of Microsoft's dev products with Jason Anderson and Tom Arnold from the Visual Studio Team System group. With the latest petition that I signed being one of the reasons Jason came out, I enjoyed our discussion on what was coming out of Microsoft. During our discussions it became apparent that being able to see the integration in Team System might do more to answer my concerns than anything else. So with that challenge, Jason took a group of us back to the campus and gave us a complete demo of their current work. It was fascinating. So much so that I believe it deserves its own entry, so I am going to blog that separately.

I love these geek dinners. Thanks to Robert Scoble for hosting it and putting me up for the night. It was a great experience. It was too bad that we didn’t get time to do some 'wine and wireless' when we got back to his place. Enjoy the bottle of wine Robert. It is a favorite gewürztraminer from my private stock.

Posted by SilverStr at 11:37 PM | TrackBack

Problems with XPSP2 RC2

This weekend I decided it was time to pave over my laptop, which has been working awesome with Windows XP for years; I decided I would go straight to XPSP2 RC2.

It had previously been upgraded to RC1, and everything was working fine. But the rebuild has proven that "something" doesn't want to play nice with my laptop.

The biggest problem is video related. I have little green dots on the desktop in a vertical line, which originally made me freak out thinking the LCD was fried. After testing I found that it's not. Then this morning, out of nowhere, one section of my screen on the top and bottom just went flaky, with black and white vertical stripes. REALLY weird. I simply dragged a window over the affected areas and it repainted fine.... and things seem to be normal again. (For now)

The other issue is that running Windows Update with the runas command no longer works. The permission set for the system seems too tight to allow it to work. By adding *.windowsupdate.microsoft.com to the trusted sites list you can at least have it identify updates for you. However, it fails to install the updates each and every time I try. I haven't debugged it enough yet to find out why this failure is occurring; I will do that when I return from the Security Summit this week.

Note to the Microsoft crew: Are you guys actually RUNNING with least privilege over there? This is a pretty major thing to break, especially if you wish to promote least privilege as an action which should always be taken. And breaking it "right out of the box" is just bad... you would have clearly seen this was a problem the first time you right clicked on Windows Update and did "run as" if you were a normal user. *sigh*

I was happy to see that RC2 included hooks between its Security Center and GriSoft's free version of AVG. Previously, in RC1, it screamed at me that "I was at risk" because it believed I didn't have anti-virus installed when I indeed did. This was good to see; according to the Security Center I am now safe *chuckle*

"God himself could not sink this ship!"
- The answer given by a deck hand when asked if Titanic was really unsinkable.

Remember, absolute security is a myth. So too is the notion that by installing XPSP2 you can rely on Microsoft to 'save' you; the belief that security should start and end with Microsoft, without any user education, is just silly... as was placing only 20 lifeboats on the Titanic.

Of course, right now the fact is that XPSP2 RC2 isn't something I am worrying about for security... I can't even get it to display right. *sigh* Hopefully a new video driver will be coming out soon. Until then, let's hope the thing works well enough to blog the Security Summit tomorrow. (Assuming of course that there is wireless access at the conference center.)

Which reminds me... it's time to head down to Seattle. Watch out Seattle, here I come.

Posted by SilverStr at 08:42 AM | Comments (1) | TrackBack

June 18, 2004

Security Brief: Mind Those Passwords!

Keith Brown has published an interesting article on MSDN about using a password multiplexer with a single master password to store and manage all your other passwords.

It even includes a tool you can download called Password Minder which uses the crypto in the .NET framework to provide this functionality. Of course, you could always use Bruce Schneier's Password Safe program if you don't have the .NET framework installed.

Have fun. Stay safe. Enjoy.

Posted by SilverStr at 12:53 PM | TrackBack

New and improved security features in ASP.NET 2.0

MSDN has released an article discussing the improved security features in ASP.NET 2.0. Topics covered include:

  • Security enhancements in ASP.NET 2.0
  • Server-side security controls
  • User and role databases
  • Cookieless forms authentication

I have found myself lately having to audit some ASP-style systems, and this article shows me there may be hope yet for the technology. (I am not a fan of how mediocre programmers can expose so many vulnerabilities in an online system because of the sheer complexity of a framework they don't understand.)

The article says the information is subject to change as it's a preview, but you can get a good understanding of what's around the corner.

Happy reading.

Posted by SilverStr at 12:41 PM | TrackBack

June 17, 2004

Security in WSE 2.0

MSDN TV has published an interesting interview with Benjamin Mitchell and John Bristowe in which they talk about the advanced XML Web service specifications that Web Service Enhancements (WSE) 2.0 supports, focusing on WS-Security. They demonstrate how WSE provides a simple object model that allows developers to secure Web services independent of the transport using only a few lines of code.
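
To give a feel for what "a few lines of code" means, the typical WSE 2.0 client pattern looks roughly like this sketch (the proxy class name, web method and credentials are placeholders, and the choice of password option is an assumption):

using Microsoft.Web.Services2.Security.Tokens;

// MyServiceWse stands in for a WSE-generated proxy class that derives
// from WebServicesClientProtocol; the service itself is hypothetical.
MyServiceWse proxy = new MyServiceWse();

// Attach a WS-Security UsernameToken to the outgoing SOAP request;
// WSE builds the security headers independent of the transport.
UsernameToken token = new UsernameToken("dana", "notMyRealPassword", PasswordOption.SendHashed);
proxy.RequestSoapContext.Security.Tokens.Add(token);

proxy.SomeWebMethod();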

If you are into web dev, you might want to check it out.

Posted by SilverStr at 02:27 PM | TrackBack

Using Nant for Windows Kernel Mode Compiles

Well, today I finally buckled down and tackled the world of Nant for automated builds.

Back in April I talked about test driven development in .NET, and discussed how I was looking for a heterogeneous solution to build both my .NET code (C# standalone apps) and my DDK code (C kernel drivers), with the final result being able to integrate it with testing tools like NUnit, FxCop and Static Driver Verifier. Nant is designed to build .NET code, but it is definitely not made for calling DDK compiles directly.

So I wrote a simple kludge today to make it all work nicely. The trick is to realize that the DDK compile environment is nothing more than a cmd window executing the setenv.bat script, which then allows you to run build -cef within that environment. This lets you target different kernels for different operating systems, different CPU types (32/64-bit), and checked (debug) or free (release) builds.

So I simply wrote a batch script (I sure felt dirty doing that) which wraps all that functionality and has the batch script simply return success or failure to the cmd line. Once that was done... I just set up the Nant environment to call it through <exec>, looking for a simple status code... and voila... automated builds. On a side note, in over 12 years of batch scripting experience I never knew that exit /b 1 existed. What a find. It allows you to force an explicit return code. Apparently this was added in WXP. Nice touch!
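
For the curious, the wrapper boils down to something like this sketch (the DDK path, the argument order to setenv.bat, the source directory and the Longhorn mapping are all assumptions you would adjust for your own setup):

@echo off
rem Hedged sketch of drvbuild.bat - paths and setenv.bat arguments are assumptions.
rem Usage: drvbuild.bat <W2K|WXP|LH> [debug]

set DDKDIR=C:\WINDDK\3790
set BUILDTYPE=fre
if /i "%2"=="debug" set BUILDTYPE=chk

rem setenv.bat sets up the kernel build environment for the chosen target
call %DDKDIR%\bin\setenv.bat %DDKDIR% %BUILDTYPE% %1

rem back to the driver source directory (assumed layout) and build
cd /d %~dp0..\src
build -cef

rem hand a simple status code back to Nant
if errorlevel 1 exit /b 1
exit /b 0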

Anyways, my build file for Nant is actually pretty simple. The part that is interesting for this discussion is basically:


<target name="debug" description="Set debug property for project build">
     <property name="debug" value="true"/>
</target>
<target name="debug-drivers" description="Build Debug Security Driver for all Platforms">
     <property name="debug" value="true"/>
     <call target="drivers"/>
</target>
<target name="drivers" description="Build Security Driver for all Platforms">
     <call target="W2K-driver"/>
     <call target="XP-driver"/>
     <call target="LH-driver"/>
</target>
<target name="W2K-driver" description="Build Security Driver for W2K">
     <exec program="./Tools/drvbuild.bat">
          <arg value="W2K"/>
          <arg value="debug" if="${debug}"/>
     </exec>
</target>
<target name="XP-driver" description="Build Security Driver for XP/WS2K3">
     <exec program="./Tools/drvbuild.bat">
          <arg value="WXP"/>
          <arg value="debug" if="${debug}"/>
     </exec>
</target>
<target name="LH-driver" description="Build Security Driver for Longhorn">
     <exec program="./Tools/drvbuild.bat">
          <arg value="LH"/>
          <arg value="debug" if="${debug}"/>
     </exec>
</target>

The interesting bit is that the drvbuild.bat file takes care of the CPU targets as well as the PREfast and Driver Verifier stuff. By using the neat little if="${debug}" to trigger passing in the debug arg, I can basically build a single driver on the fly, or all of them:


nant drivers          # Creates release drivers for all kernels
nant debug XP-driver  # Creates Checked version of XP driver
nant debug-drivers    # Creates debug drivers for all kernels
nant LH-driver        # Creates release driver for Longhorn

Yes, I know this is pretty boring stuff. But the hard part is over. Now that I can target any kernel, for any CPU, in both release and debug builds using the same tool through Nant, I can port the .NET stuff in and have scheduled daily builds without incident. I just finished adding the ability to have Nant update my internal dev team RSS feed, so not only does it build all my kernel mode code, it updates the feed to alert all subscribers to the new build's availability.

Posted by SilverStr at 01:43 PM | Comments (1) | TrackBack

June 14, 2004

Watch out Seattle... here I come!

Well, I am going to be down on the Microsoft campus next week for the Security Summit, and Robert and I thought we should get together for a bit of a geek dinner. Last time I was down it was great fun, especially hanging with Raymond discussing his knitting hobby and the reasons for old APIs that SHOULDN'T still be in the system. (It was a weird sight if you didn't know what was going on.) To boot, I had free wireless thanks to the Bellevue library, which made it more interesting!

Anyways, Robert tells me that we are going to meet at 6:30pm at the Crossroads in Bellevue on June 21st. Everyone is invited to come join us.

Pass the word!

Posted by SilverStr at 11:56 AM | Comments (10) | TrackBack

Unit Testing In Visual Studio 2005

Unit Testing support should be included with all versions of Visual Studio 2005 and not just with Team System.

I agree with this petition. Of course, you can always use NUnit!

Posted by SilverStr at 07:13 AM | TrackBack

June 11, 2004

Port Knocking with Cerberus

Since I have received a bunch of emails this morning asking for a copy of the presentation I did last night on my 'Cerberus' ICMP port knocker, I have decided to just put it online and be done with it. You can get it here.

If you weren't at the LUG meeting last night, many of the slides won't make a lot of sense. Five years ago I wrote an ICMP listening daemon that looks for specially crafted packets. When a pattern is found within an ICMP type 8 packet (a ping), a simple but effective auth lookup is performed and then action is taken based on the authorized rights of the requesting party.
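
Conceptually the listener boils down to something like this sketch (this is NOT the actual Cerberus code, and it's in C# rather than what I wrote it in; the magic token is a made-up placeholder):

// Conceptual sketch only: inspect the ICMP portion of a raw packet for a knock.
static void HandleIcmpPacket(byte[] icmpPacket)
{
    // ICMP type 8 = echo request (ping); ignore everything else.
    if (icmpPacket == null || icmpPacket.Length < 8 || icmpPacket[0] != 8)
        return;

    // The echo payload starts after the 8-byte ICMP header.
    string payload = System.Text.Encoding.ASCII.GetString(icmpPacket, 8, icmpPacket.Length - 8);

    // "CERB-OPEN" is a placeholder pattern; the real daemon does an auth lookup
    // here and then takes an action (open a port, kick off a scan, bring up a VPN).
    if (payload.IndexOf("CERB-OPEN") >= 0)
    {
        Console.WriteLine("Knock accepted; performing authorized action.");
    }
}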

I have used this technique for years. It allows me to send a single ping from anywhere in the world and have machines execute code without my having to actually log in. I use it to open up firewall ports dynamically (kinda like what traditional port knockers do), run Nessus and nmap scans against targets while in the field, and even establish point-to-point VPNs with FreeSwan. It has been very beneficial to be on a client site, use my WAP-enabled phone to connect to a page with a Perl backend built on Net::RawIP, enter the IP of the client's outside interface, and have a complete scan report sent to his email while sitting in a meeting.

Nowadays port knockers have the potential to do a lot more (remember, I wrote this 5 years ago), so this is probably pretty boring to most of you. But Cerberus has served me well, and I decided to finally talk about it at the user group meeting last night. You are welcome to take away anything you can from the presentation if you like.

Posted by SilverStr at 11:47 AM | Comments (1) | TrackBack

New CSI/FBI Computer Crime and Security Survey Out

For the past 9 years the Computer Security Institute has done a joint study with the FBI on computer crime and security. The 2004 study has just been released and you can get it here.

Here are some of the key findings in this year's research:

  • Unauthorized use of computer systems is on the decline, as is the reported dollar amount of annual financial losses resulting from security breaches.
  • In a shift from previous years, the most expensive computer crime over the past year was due to denial of service.
  • The percentage of organizations reporting computer intrusions to law enforcement over the last year is on the decline. The key reason cited for not reporting intrusions to law enforcement is the concern for negative publicity.
  • Most organizations conduct some form of economic evaluation of their security expenditures, with 55 percent using Return on Investment (ROI), 28 percent using Internal Rate of Return (IRR), and 25 percent using Net Present Value (NPV).
  • Over 80 percent of the organizations conduct security audits.
  • The majority of organizations do not outsource computer security activities. Among those organizations that do outsource some computer security activities, the percentage of security activities outsourced is quite low.
  • The Sarbanes-Oxley Act is beginning to have an impact on information security in some industries.
  • The vast majority of the organizations view security awareness training as important, although (on average) respondents from all sectors do not believe their organization invests enough in this area.

If you are in the infosec space, you really should take some time to read the report. You get a good trend analysis of what has changed over the last few years, and a sense of where the industry is going.

Posted by SilverStr at 09:38 AM | TrackBack

June 09, 2004

Book Review - The E-Myth Revisited

I have never been so engaged in a book focused on business building before. I just started reading "The E-Myth Revisited: Why Most Small Businesses Don't Work and What to Do About It" on Monday, and I have already finished reading it for a second time. That's right, I read the 268-page book cover to cover twice in 3 days.

The book was so interesting to me that while away on business I stayed an extra day on Vancouver Island and simply read. I couldn't put it down. It appealed to me on so many levels as an entrepreneur that I found myself immersed in the content, and when I finished it I was in awe of how applicable it was to me as a small ISV.

The concepts in this book are quite different from what I normally read about building a business. The idea is to work ON your business, and not IN it. Now this in itself isn't new; I take a complete day every month to update and work on my corporate strategic plan, taking ownership by presenting my vision to my board of advisors. What is really different here is the approach Michael Gerber takes in having you treat your business as a turn-key operation, more commonly referred to as a franchise. (Even though it isn't one.)

Why a franchise? Well, consider this: 80% of all new startup businesses fail in the first year. Of the remaining 20% that do make it past the first year, 80% of them will fail in the next 5 years. In other words, out of every 100 companies that start up, only 4 of them will make it past 5 years. Now here is the interesting stat... 75% of all franchised businesses are successful and make it past 10 years! Why?

Well, franchises use "systems". These are business processes that the parent company has built and refined to be successful. To a point, they have a known result each and every time they do something. It's a recipe for success. They have built a map, "operating procedure manuals," on how to do things right, consistently, all the time.

Think of McDonalds. When Ray Kroc created the first McDonald's he made sure that every process, from prepping the food to communicating with the customer, was documented. From colors to advertising, hiring to training... there is a system in place for everything in the business. And he can make uneducated teenagers do this successfully. You can go into any one of their restaurants worldwide and get the same hamburger, the same taste, the same expectations. Ok, so maybe you don't have the highest expectations. But you know what you are going to get. Each and every time. And that is why they make billions of dollars every year selling a hamburger for a couple of bucks.

So how does that apply to a software company? Well, I can't speak for other companies... but for me the driving force is the customer. My company's focus is on protecting our customers and their information. That was one of the core reasons I got into this business. In the last company I built we had a major failure when it came to our expectations of what we were to do with the customer pre-, during and post-sales. It wasn't well defined. It wasn't well documented. And we had breakdowns. And we were amazingly lucky. We benefited from having some of the most amazing customer service reps, and we were constantly complimented on our service to our customers. But truth be told, it wasn't the system we had in place that was successful but a few amazing people. And we had several breakdowns when the wrong people got involved. And it didn't scale at all. The same problems existed in fulfillment... in one case costing us tens of thousands of dollars in business on a single recall because a process wasn't followed and we shipped hardware that wasn't even programmed. We had an amazing and impressive recovery, thanks to the manager of the department taking quick and decisive action when the error was found, but it should never have happened in the first place. And it wouldn't have if we had the right "system" in place for that particular function.

This book explains how to build those systems, and how to utilize them so that everyone in the business takes ownership of their areas of responsibility and knows exactly what to do. By refining the process as you go along, you truly can build a recipe for success: to know that when you spend X to do Y, result Z will occur. Not by extraordinary people doing normal tasks, but by normal people doing normal tasks extraordinarily... each and every time!

Done right, you can learn how to work ON your business... not IN it. Learning how not to be a slave to the business, but have the business serve you. And that is an important differentiator. If you know me, you know I am a workaholic... and have a passion for what I do. But I have always been serving the business, working IN it as a technician instead of working ON it as an entrepreneur. And this book opened my eyes to the difference.

Posted by SilverStr at 10:34 PM | TrackBack

June 07, 2004

Threat Modeling Resource Page


"To protect your applications from hackers, you have to understand the threats to your applications. Threat modeling is comprised of three high-level steps: understanding the adversary’s view, characterizing the security of the system, and determining threats. The resources on [Microsoft's] page will help you understand the threat modeling process and build threat models that you can use to secure your own applications."

- Taken from Microsoft's Threat Modeling Page

Posted by SilverStr at 11:29 AM | TrackBack

Book Review - Coder to Developer

Over the weekend I finished reading "Coder to Developer: Tools and Strategies for Delivering Your Software". This book wasn't what I was expecting, but in a good way.

I originally heard about this book through an article by Joel, in which he posted the foreword he wrote for this book. (It's well worth the read.) It sounded interesting, and I started looking for the book up here in Canada. Nada. No one is carrying it. No worries, I told myself, and ordered it from Amazon.

When I got it I started reading it right away, and I found that I liked the approach of this book. Where most of the books I read are focused on the process of building big business, securing multi-national corporations or developing in the largest of team environments, this book is really targeted to the small ISV... which is me! This made for a more enjoyable read as I could better identify with the author's comments and points of view.

The layout of topics covered was in my opinion bang on as it relates to issues faced when moving from "coder to developer". Topics included:

  1. Planning Your Project
  2. Organizing Your Project
  3. Using Source Code Control Effectively
  4. Coding Defensively
  5. Preventing Bugs with Unit Testing
  6. Pumping Up the IDE
  7. Digging Into the Source Code
  8. Generating Code
  9. Tracking and Squashing Bugs
  10. Logging Application Activity
  11. Working with Small Teams
  12. Creating Documentation
  13. Mastering the Build Process
  14. Protecting Your Intellectual Property
  15. Delivering the Application

The only problem, in my opinion, is that the author didn't go into the right depth in some areas, and quite frankly didn't have the right experience in others. As an example, the protecting-IP chapter felt unfinished to me, mostly a cut and paste of different licenses, from GPL to proprietary and back again. I was also hoping for more in-depth coverage of using obfuscators. I know what they are; I want to see how to better integrate them into my build process. Of course, I think the "Coding Defensively" chapter could be expanded on, but if you are a regular here you can kinda expect that :)

But these are small shortcomings in an otherwise great book. One of the things I liked was how he approached teaching concepts. Throughout the book he follows the building of a program called "Download Tracker", showing the concepts through direct usage in his project. This was extremely effective for me, as I have had difficulty figuring out how I can use NUnit for unit testing in my own work. Through his examples, I can now see how to implement unit testing with NUnit in my .NET standalone apps, and will do so in the near future.

Building on that, I also liked that he would show how to use different tools to accomplish the same thing via a different approach. In many cases the author would point out tools along the pricing scale, allowing for the budget-conscious ISV to still get many of the benefits of certain types of tools... even if they can't afford others. He even provided URLs to these tools so they would be easy to find.

What surprised me about this book was how .NET-oriented it was. Although you can still walk away with the concepts and principles of the book if you code on other platforms, this REALLY was focused on the .NET framework. That was ok for me, as I actually am using it... but I could see some growing tired of the references if they were developing on a Unix platform.

Great book. Great content. Easy read. Everything you want out of a book.

I am going away on business for the next couple of days, and have my next book in hand. Switching back to a business focus, I am now reading "The E-Myth Revisited: Why Most Small Businesses Don't Work and What to Do About It". Review to follow.

Posted by SilverStr at 11:16 AM | Comments (1) | TrackBack

June 04, 2004

New version of WinDbg Out

Oh man am I ever happy with the new changes in WinDbg.

If you write any sort of kernel code, you NEED to upgrade to the new version. (Unless of course you are harnessed to SoftICE, that is.) The updated UI is REALLY nice... almost Visual Studio-like, even.

Here is a list of some of the changes in the Debugging Tools for Windows:

  • Now supports Longhorn
  • New user interface (UI) management capabilities in WinDbg
  • Made many improvements to the !analyze extension
  • Improved extension interface documentation (debugext.chm).
  • Execute a series of debugger commands programmatically or create a more complicated "program" using flow control. This allows you to conditionally execute commands or even pipe the output of one command into another. New control flow tokens include .foreach, .do, .for, .while, .if, .elsif, .else, .catch, .break, .continue, and .leave. Aliases are used as the "local variables" in these programs. SWEET!!
  • Multiple new options in setting aliases

WinDbg now features enhanced UI management capabilities. Support for window docking, window tear-off and window tabs has been added to allow users more flexibility in configuring the user interface. Here is a screenshot of my new default layout (of course there is nothing attached, so you can't see any src code):

Time to upgrade? I think so. Go get your free copy here.

Posted by SilverStr at 05:23 PM | TrackBack

June 02, 2004

Secure Coding: Running Processes as a Different User

Shawn has posted an interesting entry about how in Whidbey you can use the Process class to specify the user context that the new process should run under. This differs significantly from current approaches, as you normally have to P/Invoke CreateProcessWithLogonW to do it through impersonation.

I've talked about different approaches before when I discussed spawning external processes securely in Windows and using restricted tokens to execute a process, but this is much more elegant. It's nice to see the Process class add new functionality through the exposure of three new properties on the ProcessStartInfo class: Domain, UserName, and Password.

Here is a snippet that Shawn used (although of course you would do better input validation than that :) ):

Console.Write("Username: ");
string user = Console.ReadLine();
string[] userParts = user.Split('\\');
        
Console.Write("Password: ");
SecureString password = GetPassword();

try
{
    ProcessStartInfo psi = new ProcessStartInfo(args[0]);
    psi.UseShellExecute = false;
            
    if(userParts.Length == 2)
    {
        psi.Domain = userParts[0];
        psi.UserName = userParts[1];
    }
    else
    {
        psi.UserName = userParts[0];
    }

    psi.Password = password;

    Process.Start(psi);
}
catch(Win32Exception e)
{
    Console.WriteLine("Error starting application");
    Console.WriteLine(e.Message);
}
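
Shawn's GetPassword() helper isn't shown above; a minimal sketch of what it presumably does (reading keystrokes into a SecureString without echoing them) might look like this:

// Hedged sketch of a GetPassword() helper for Whidbey: read characters
// into a SecureString without echoing them to the console.
static SecureString GetPassword()
{
    SecureString password = new SecureString();

    while (true)
    {
        ConsoleKeyInfo key = Console.ReadKey(true);   // true = don't echo

        if (key.Key == ConsoleKey.Enter)
            break;

        if (key.Key == ConsoleKey.Backspace)
        {
            if (password.Length > 0)
                password.RemoveAt(password.Length - 1);
        }
        else
        {
            password.AppendChar(key.KeyChar);
        }
    }

    Console.WriteLine();
    return password;
}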

Anyways, nice find Shawn!

Posted by SilverStr at 04:02 PM | Comments (1) | TrackBack

June 01, 2004

Economics of Information Security

Alex has an interesting post pointing to a collection of links Ross Anderson has on Economics and Security.

I've read most of the information there before, and the real gem within that entire page is the link to a paper Ross wrote on Why Information Security is Hard - An Economic Perspective. It was one of the first papers dedicated to information security which really touched on the heart of the matter. If you haven't had a chance, you might consider reading it. It applies economic analysis to explain a number of phenomena that security researchers had previously found to be pervasive but perplexing.

Happy reading!

Posted by SilverStr at 07:40 AM | Comments (2) | TrackBack

Bruce Schneier Says Microsoft Is Proving Security is NOT Their #1 Priority

When Bruce speaks, most people listen. Yesterday he got on his pulpit and discussed his views on Microsoft's stance on security.

The gist of it? Microsoft is showing that security is NOT their #1 priority in the wake of the release of XPSP2. Why? Because more than anything else Microsoft has said or done in the past few years, NOT releasing XPSP2 to pirated versions of Windows XP proves to him that security is not the company's first priority. Here was a chance for Microsoft to do the right thing: to put security ahead of profits. Here was a chance to look good in the press and improve security for all its users worldwide. Microsoft says that improving security is the most important thing, but its actions prove otherwise.

I can't say that I disagree with him. In this era of computing not only do you have to ensure your networks are secure, you have to rely on other people to secure theirs. If you don't patch pirated versions of Windows XP, these machines will continue to be carriers of malicious code, as vulnerabilities will stay open, along with bad security practices such as running without a firewall, anti-virus, etc.

Of course, if the digital underground's response is anything like it was for XPSP1, pirates will have a workaround within days.... but that's not the point. It's hard to weigh this against the fact that Microsoft has every right to protect its profits. Or does it? Some say Microsoft has a higher responsibility to ensure our desktops are safer because of the very dominance it enjoys. Quite frankly, most pirates aren't going to buy XP if they haven't already. And these people are typically the same people ripping warez off of P2P networks infected with hostile code just waiting to spread.

I think Bruce could sum up this post way better than I could... "SP2 is an important security upgrade to Windows XP, and I hope it is widely installed among licensed XP users. I also hope it is quickly pirated, so unlicensed XP users also can install it. For me to remain secure on the Internet, I need everyone to become more secure. And the more people who install SP2, the more we all benefit".

Posted by SilverStr at 07:26 AM | TrackBack