February 27, 2004

HFNetChkPro adds Linux and Agent Capabilities

Now this is an interesting find announced at the RSA conference. Shavlik Technologies has added Linux and agent capabilities to its HFNetChkPro patch management solution. For those who don't recall, these guys wrote the engine that is used in the Microsoft Baseline Security Analyzer. And up until now... all their products were focused on Windows patch management.

HFNetChkPro will now give network administrators the option to roll out patches via agents for customer applications that require them. In addition, based on the recent acquisition of Gibraltar Software technology, a separate version of HFNetChkPro will allow customers running Red Hat Linux to scan the network for Linux operating system vulnerabilities, identify machines to be patched, and automatically roll out and validate patches.

Nice. Really nice. Only drawback... it's for Red Hat only. I hate it when the world thinks Red Hat == Linux, making announcements "generic" when they should be more specific.

Then again, I bet it wouldn't be hard to get an agent running on Debian. Any takers?

Posted by SilverStr at 04:38 PM | Comments (1) | TrackBack

February 26, 2004

New Productivity Virus on the Net

Haven't posted anything fun lately, so I think I will change that.

Shawn reports of a new productivity virus on the Net. I must admit... I got nailed by it too. :(

Remember the Warthog vid called "Warthog Jump: A Halo Physics Experiment" where they were showing a bug in the physics of the game Halo? Throw some nades under the vehicle and watch it launch. It was a well done video at the time (in case you haven't seen it).

Well, now there is a flash version you can play yourself. It just sucked me in for a good 30 minutes. Unfortunately... I can't get past level 30.

30 levels in 30 minutes. How about you?

I am sure if I spent some more time I could get past it... but it already drained too much out of me. I hope it has the same effect on you :)

Posted by SilverStr at 11:34 AM | Comments (7) | TrackBack

Automating Windows Patch Management

One of my favorite things about Debian GNU/Linux is the powerful tool apt. If I want to update my system with the latest patches I just type:

apt-get update; apt-get upgrade

It will check for new updates, find and meet dependencies, download and then install what is needed. I can even automate this in cron if I choose to. And of course, if I need to add a new package that's in my apt sources, I just type:

apt-get install new_package

I can even point apt at my own local repository if I like, not having to go out to the Net if I choose to mirror the archive instead.
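
If you want to go the cron route, a single crontab entry is all it takes. Something along these lines (a rough sketch only; adjust the schedule for your own box, and remember that blindly auto-upgrading a production machine has its own risks):

# /etc/crontab entry - check for and apply updates every night at 4am
0 4 * * *   root    apt-get update && apt-get -y upgrade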

One of the things I get bugged about a lot by the Linux crew I sometimes hang with is how there is no "clean" patch management system for Windows. In an almost condescending tone they like to comment on how good tools like apt, emerge and red carpet are and how bad Windows Update is.

Well, today I decided I would point out a couple of great SecurityFocus articles on using Microsoft's Software Update Service (SUS) to do just that.

Part 1: Automating Windows Patch Management
Part 2: Automating Windows Patch Management

This is something I have been looking into recently as I really want to get a handle on how to create MSI files which can be added to SUS to distribute and maintain our products in an automated manner. These articles were a good introduction.

If you stay tuned, the author plans a third installment to the series to discuss tools (commercial and free) that can be used in conjunction with SUS. Should be an interesting read.

Enjoy.

Posted by SilverStr at 08:27 AM | Comments (6) | TrackBack

February 25, 2004

Electronic Crime Scene Investigation: A Guide for First Responders

Computers and other electronic devices are being used increasingly to commit, enable, or support crimes against persons, organizations, or property. The "Electronic Crime Scene Investigation: A Guide for First Responders" guide (NCJ 187736) is intended for use by law enforcement and other responders who have the responsibility for protecting an electronic crime scene and for the recognition, collection, and preservation of electronic evidence. The document is not all-inclusive. Rather, it deals with the most common situations encountered with electronic evidence.

The document is from 2001, and a lot has changed in the last three years. But it seems to still be a pretty good foundation to learn from.

Happy reading!

Posted by SilverStr at 08:58 AM | TrackBack

February 24, 2004

Microsoft to include antivirus in XPSP2?

According to Internet News, Microsoft has now included anti-virus technology in the next beta of XPSP2. I guess it's not entirely surprising, since they did BUY a Romanian anti-virus company.

What gets me about this is that if the article is true, Microsoft is turning on itself. When it bought the antivirus company, Mike Nash (a Microsoft VP in charge of the security business unit) told anti-virus vendors that their product lines were safe. One of his quotes was a promise that Microsoft "wouldn't be encroaching on their space". I wouldn't be surprised now if Symantec and McAfee cry foul and look into antitrust suits. Especially since the article points out that some anti-virus products have compatibility problems running on SP2.

I don't know what to make of it. From a business point of view it's sad to see Microsoft squeeze out more players in this field. However, the fact users will get a FREE anti-virus product is worth noting. Personally I would have rather seen Microsoft partner with the antivirus and firewall vendors and capitalize on those companies' expertise, rather than acquiring the technology and folding it into their business as 'yet another product'. Focus on what you are good at, and let other businesses focus on their core competencies. Everyone could make more money... and we would have vastly superior products out there.

Posted by SilverStr at 02:48 PM | Comments (7) | TrackBack

Gates speaks about Security at RSA Conference

Back in December I said that Bill Gates has guts. When he announced he was going to go to a premium security conference like RSA and speak... I was shocked.

Well he spoke today, and I don't think anyone would argue that he had some real "grit" in his presentation. He announced a few things of interest:

  • The improved firewall. (Check out this screenshot courtesy of Neowin.net)
  • A new "Windows Security Center" that offers a centralized place to view security settings and get advice on how to evaluate a PC's vulnerabilities. (Check out this screenshot)
  • A new technology called "active protection technology" that Microsoft says can make computers more resilient in the face of an attack.
  • Announced a new partnership with RSA to tie SecurID tokens more closely into Windows (Yaah!!)
I am mostly pumped about the last one. I have always liked two-factor authentication as a way to address the weakness in passwords. Problem is, no one can afford an ACE/Server and ACE/Agents all over the place to properly utilize it. With this announcement, hopefully prices will either drop... or a free scaled-back alternative will be available to desktop users.

Update: Seems NeoWin doesn't like people linking to their screenshots. You should go here and click on them yourselves. Thanks to Arc for pointing this out.

Posted by SilverStr at 11:58 AM | Comments (3) | TrackBack

February 20, 2004

MSDN Webcast Review: Writing Secure Code - Threat Defense

Where Joel's presentation on Wednesday was based around people and practices, today's presentation was about technology. And just like Wednesday, he did an excellent job.

He covered a great amount of information which basically breaks down to:

  • Defending against memory issues (buffer overflows etc)
  • Defending against arithmetic errors (integer over/underflows etc)
  • Defending against cross-site scripting
  • Defending against SQL injection
  • Defending against canonicalization issues
  • Defending against cryptography weaknesses
  • Defending against unicode issues
  • Defending against denial of service
I was REALLY impressed with the demos. As an example, when I was first learning about buffer overflows years ago I wish I could have watched this presentation to learn how to trample the stack. In 3 minutes he did a better job than most online tutorials do. I was also happy to see him implement the safe string functions from <strsafe.h> in his examples. I use the same approach in my kernel code using <ntstrsafe.h>, and appreciate that he is teaching developers to use it as part of their normal daily programming. Of course he later didn't follow his own advice and I saw a few strcpy calls, but I will chalk that up to demo-purposes-only code.

I had to chuckle when he whipped out Windows Calculator to show how integer overflows happen by typing in 65533 and then adding 10, switching to binary view and looking at the 17-bit output. Neat prop to easily show how a simple math calc could cause a 16-bit integer overflow. I will need to use that trick when teaching others about this common coding problem.
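
For the code-minded, here is roughly the same trick expressed in C#. This is just a quick sketch of my own to illustrate the point (a fragment that would live inside a method), not something from the webcast:

// A 16-bit unsigned integer silently wraps when the true result needs 17 bits.
ushort total = 65533;
total += 10;                    // 65543 doesn't fit in 16 bits...
Console.WriteLine(total);       // ...so this prints 7 (65543 mod 65536)

// Wrapping the math in a checked block turns the silent wrap into an exception:
checked
{
    ushort boom = 65533;
    boom += 10;                 // throws System.OverflowException
}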

Overall, this was the best presentation of the whole week. Enjoyable, educational and well worth the time. You should consider watching the on-demand presentation which will be online in a few days here. In the meantime, I kept a copy of the PowerPoint slides of the presentation, which you can get here.

Posted by SilverStr at 12:28 PM | TrackBack

MSDN Webcast Review: Essentials of Application Security

Well, this has been the most "knowledge technical" presentation I have viewed this week. A great deal of time was spent on authentication and authorization, and the presenter did a good job showing different ways of accomplishing this.

I don't have much to say on this presentation since it was SO web app focused; it wasn't about secure programming techniques, but how to use technologies. I wasn't bored, but wasn't excited either.

Guess what? In the next few days the presentation will be available on-demand here, so you can check it out for yourself. In the meantime, I kept a copy of the PowerPoint slides of the presentation, which you can get here.

Posted by SilverStr at 10:22 AM | TrackBack

February 19, 2004

MSDN Webcast Review: Application Hacking Techniques

Well well well. An actual presentation on attacking a target that wasn't scripted from a "Hacking Exposed" book. (I have low tolerance for people who have no real knowledge of attacking targets past reading "Hacking Exposed" once, yet pass themselves off as 'leet h4X0rs.)

To be honest I enjoyed watching the SQL injection demos the most. I even learned a neat new command through SQL Server's extended stored procedures:

EXEC master.dbo.xp_cmdshell 'dir C:\'

Well, ok I already knew that one... but I DIDN'T know you could manipulate registry keys:

-- @ver needs to be declared before it can receive the value
DECLARE @ver varchar(50)
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
'SOFTWARE\Microsoft\Windows NT\CurrentVersion',
'CurrentVersion',
@ver OUT
SELECT @ver

That's right, you can read/write/delete registry keys from SQL. There are a tonne more commands you can execute, which I stumbled upon while looking into this during the demo, like checking media types and even checking if files exist on a system!
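
For example (these are the ones I poked at during the demo, from memory, so double check the exact procedure names against your own server):

-- returns 1 in the 'File Exists' column if the file is on the server's disk
EXEC master.dbo.xp_fileexist 'C:\boot.ini'

-- lists the drives (media) available on the server
EXEC master.dbo.xp_availablemedia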

Overall this was a pretty good presentation. There were a couple of slow parts, but I was able to use that time to figure out how to install EmbPerl (thanks Wim and Alan!) into Apache and get it running cleanly over SSL.

As always, in the next few days the presentation will be available on-demand here, so you can check it out for yourself. In the meantime, I kept a copy of the PowerPoint slides of the presentation, which you can get here.

Posted by SilverStr at 12:55 PM | Comments (4) | TrackBack

February 18, 2004

Another critical mremap() bug found in all Linux kernels

A critical security vulnerability has been found in the Linux kernel memory management code inside the mremap(2) system call, due to a missing function return value check. This bug is completely unrelated to the mremap bug disclosed on 05-01-2004, except that it concerns the same internal kernel function.

Time to get patching. You can read more about the vulnerability here.

Posted by SilverStr at 12:52 PM | TrackBack

MSDN Webcast Review: Writing Secure Code - Best Practices

What a great presentation. Joel did a great job. It was detailed on secure coding best practices and included threat modeling... covering both STRIDE threat modeling and Attack Trees. It was interesting to see him present the DREAD model for rating threats as superior to using the common formula:

risk = Probability(chance) * Damage Potential (damage)

I've been using r=c*d for over a year instead of DREAD, and found it works well for me. I am going to need to read up on some secondary documentation on DREAD and see why Microsoft prefers that method.
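
For anyone who hasn't seen DREAD before, here is a rough sketch of how the two rating approaches differ, using the usual 1-10 scoring. The numbers are made up for illustration; this is my own comparison, not something out of Joel's slides:

// The formula I have been using: two ratings multiplied together.
int chance = 8;                     // probability the threat gets exploited (1-10)
int damage = 6;                     // damage potential if it does (1-10)
int risk = chance * damage;         // 48 out of a possible 100

// DREAD averages five ratings instead: Damage potential, Reproducibility,
// Exploitability, Affected users, and Discoverability.
int dmg = 6, repro = 8, exploitability = 7, affected = 9, discover = 5;
double dreadRating = (dmg + repro + exploitability + affected + discover) / 5.0;   // 7.0 out of 10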

The demos were pretty good too. Joel even showed how to properly do development as a non-admin user, which should benefit the many people who don't. If you don't know WHY that's important, consider reading my CodeProject article about developing with least privilege.

I did learn one interesting component in VS.NET. I always write my validation routines by hand and deal with it that way. I didn't know there was an ErrorProvider control that makes that simpler. Will have to look into that.
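
From a quick look at the docs it seems to boil down to something like this. A minimal sketch of my own, so the field name and event wiring are just illustrative assumptions:

// Drop an ErrorProvider component on the form, then hook a field's Validating
// event and let it display the error icon and tooltip next to the control.
private ErrorProvider errorProvider = new ErrorProvider();

private void txtName_Validating(object sender, System.ComponentModel.CancelEventArgs e)
{
    if (txtName.Text.Trim().Length == 0)
    {
        errorProvider.SetError(txtName, "Name is required.");
        e.Cancel = true;                      // keep focus here until it's fixed
    }
    else
    {
        errorProvider.SetError(txtName, "");  // clear any previous error
    }
}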

I REALLY liked how Joel showed how to use the Data Protection API (DPAPI) to encrypt and decrypt SQL connection strings. I wish more people knew how to do this. If you hardcode your connection strings, you really should watch this presentation and learn how to fix that.
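
To give a flavour of it, here is a bare-bones sketch in C#. I am using the managed ProtectedData wrapper here for brevity; depending on the framework version you have you may need to P/Invoke CryptProtectData directly instead, and the demo in the webcast may well have gone about it differently:

using System;
using System.Text;
using System.Security.Cryptography;   // ProtectedData (in System.Security.dll)

// Protect a (made-up) connection string with DPAPI so only this user account can recover it.
byte[] plain = Encoding.UTF8.GetBytes("Server=db1;Database=orders;Trusted_Connection=yes;");
byte[] blob  = ProtectedData.Protect(plain, null, DataProtectionScope.CurrentUser);
// ... persist 'blob' (file, registry, config) instead of the raw string ...

// Later, unprotect it just before opening the connection.
byte[] recovered = ProtectedData.Unprotect(blob, null, DataProtectionScope.CurrentUser);
string connectionString = Encoding.UTF8.GetString(recovered);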

All in all, this has been the best presentation so far.

In the next few days the presentation will be available on-demand here, so you can check it out for yourself. In the meantime, I kept a copy of the PowerPoint slides of the presentation, which you can get here.

Posted by SilverStr at 12:35 PM | Comments (3) | TrackBack

Order the Windows Security Update CD

Microsoft has a web page up now that will allow you to order the Windows Security Update CD free of charge... and they will even ship it to you. This CD includes Microsoft critical updates released through October 2003 and information to help you protect your PC. You will also receive free antivirus and firewall trial software.

This CD is only available for Windows XP, Windows Me, Windows 2000, Windows 98, and Windows 98 Second Edition (SE).

They say to allow 2-4 weeks for delivery.

Enjoy.

Posted by SilverStr at 01:00 AM | TrackBack

February 17, 2004

Help: Any Outlook 2003 Gurus Out there?

Are there any Outlook 2003 users out there who can explain why getting S/IMAP data sucks big time? Both Thunderbird and Outlook Express work awesomely fast checking mail on the same system, and never hang. Yet Outlook seems to stall, timeout and take its frigging time. To top it off, its threading SUCKS... it's not done in the background very well. When it's downloading headers I basically cannot read any other mail, reducing my productivity significantly. The app just hangs until the download is done, assuming it ever finishes. Right now it has been sitting at "Send/Receive Status 98%" for over an hour, and I can't check my mail in the folder it's hung on. And this happens a LOT.

Basically it's really unresponsive, slow and clunky. I can see why Chris Pirillo dropped it. Yet others say theirs works great. There has to be some configuration issue here. Just not sure what. This is a brand new OS and app install. I have given it two full weeks of use, and it hasn't gotten any better (although I must admit having all my calendaring has been nice when Live Meeting events occur). I really want to take advantage of the other features in Outlook, and can't do that if I can't even check my mail without issue.

Alternatively, does anyone know of a calendaring and contacts plugin for Thunderbird that will sync to Palm on XP? I will drop Outlook like a stone if I can get everything to work together.

Posted by SilverStr at 05:10 PM | Comments (8) | TrackBack

MSDN Webcast Review: Computer Crime and Security

This morning I sat in the MSDN LiveMeeting Webcast "Computer Crime and Security". Sitting might be a generous term. I was bored. I just couldn't stay focused.

It started with a presenter basically regurgitating the 2003 CSI/FBI Computer Crime and Security Survey. Pretty slides and all... but I already read them in the report, and went through them in depth during the CSI seminar. I am sure for people who knew nothing about computer security, this would be interesting stuff. And I don't knock the presenter for that... that was his target audience. (Note to self: do not attend any more Level 100 seminars.)

One thing I think was TOTALLY missed was threat modeling. During the presentation it got exactly one bullet point. I questioned them on this, and I was told that it could be done as part of the technical and risk assessment. That's true, and they are right. But I think that really should have been expanded on.

MSDN would do well to actually provide a webcast dedicated to threat modeling. Hell, Michael Howard could do an excellent job presenting STRIDE threat modeling to all the developers out there. And if he doesn't have the time to do it, I would be more than happy to.

Anyways, with that part of the presentation finished we moved on to a new presenter who really knew his stuff. Although he knew his stuff, I don't think that knowledge really got passed on. He did all the traditional "no-no's" when it comes to speaking. He spoke too quietly, too quickly and assumed we caught it all. The technology didn't help either, as Live Meeting just couldn't keep up with the app sharing on my end; in some cases dialog boxes and windows flashed by like mad without us being able to follow them, as they wouldn't redraw before he closed them or scrolled on.

Another thing that made it difficult was his accent. Combined with the soft speaking and the speed of his presentation, his accent made it difficult to follow his thoughts. There wasn't much that could be done about that.

The final presenter surprised me. I walked into the presentation expecting Microsoft-focused tools and technologies. During the introduction we were told that a gentleman from Sanctum would present AppScan Developer Edition. I thought to myself "oh god, a sales pitch… not good." I was pleasantly wrong. Although he did show the tool, it wasn't shoved down our throats. Looked like a pretty good tool too. Of course, I don't do ASP.NET coding, so it's pretty much useless to me... but it could really help some of you web code monkeys out there.

In the next few days the presentation will be available on-demand here, so you can check it out for yourself. In the meantime, I kept a copy of the PowerPoint slides of the presentation, which you can get here.

Posted by SilverStr at 10:54 AM | TrackBack

mySQL now supports Stored Procedures

This is more for me than anything else. Just came across the fact that mySQL 5.0 now supports stored procedures. (Finally!) It follows the syntax for SQL:2003, but there are plans to consider PL/SQL and T-SQL in the future.

Documentation can be found here.

Perfect timing, as I am just preparing to work on a project that will need that.

Posted by SilverStr at 09:51 AM | Comments (3) | TrackBack

February 16, 2004

MSDN Webcast Review: How to Perform a Security Review

This morning I sat in the MSDN LiveMeeting Webcast "How to Perform a Security Review". It wasn't very technical, but I have to say it wasn't all that bad. Not much new here for me, but it was refreshing to hear others talk about it. The presenter knew what he was talking about, and I think that makes all the difference when presenting such a topic. In the next few days the presentation will be online on-demand over at http://www.microsoft.com/seminar/events/webcasts/ondemand.mspx, and you can check it out for yourself.

In the meantime, I kept a copy of the PowerPoint slides of the presentation, which you can get here. It will do a better job of explaining the presentation than I ever could. You won't see the demos, but you can watch those later.

During one of the demos they were using FxCop and I noticed they had a MsInternalSecurityRules.dll, which isn't part of the FxCop distro for version 1.23. Why not? What's so secret for Microsoft security that we shouldn't know? Anyone on the FxCop team want to respond to that?

Update: Microsoft got back to me about MsInternalSecurityRules.dll. Apparently they retired the DLL and moved the rules into the general FxCop distribution files.

Posted by SilverStr at 12:17 PM | TrackBack

February 14, 2004

Shattering the crystal and poking holes in the black box

Let's shatter the crystal and poke holes in the black box.

Recently there has been some banter between a couple of articles on DevX and O'Reilly debating whether Open Source Software is secure, or whether it is a fertile ground for foul play. Both articles have some compelling points of interest, and yet both are flawed. It is impossible to defend either side of the equation when both sides are entrenched in "grass roots" style feelings which blur fact and fiction. FUD seems to be a shield for everyone nowadays, and quite frankly it never aids in honest and unbiased point/counter-point discussion.

Rather than regurgitate the strong points of either argument, allow me to put this into context from a secure coding perspective. Although it is a tangent from the original baseline discussion, I think if you read through my thoughts here, you will see what I am getting at.

The reality is that both sides miss the fact that THEORY and REALITY don't mix when it comes to the software engineering of today, especially when talking about the crystal box approach to secure code versus the black box approach.

Whenever OSS is discussed in the context of security, the position always ends up leading towards its "golden child"… that of strong cryptography. Since the days of Bletchley Park in World War II, encryption ciphers have typically been reviewed for years by experts in the field. The source is available so cryptographers can audit the entire algorithm and build proofs to show its strengths and weaknesses. When NIST decided on building a new encryption standard for Federal Information Processing (as part of the FIPS standards) back in 1997, they intelligently turned to the crypto field and had the entire process reviewed. AES has undergone a thorough audit process. It took them over a year to get the 15 original algorithms reviewed and submitted for consideration. From there, after months of rigorous testing and cryptanalysis, 5 ciphers survived analysis from the experts. Finally in October 2000, 3 years after the project began, NIST announced the selection of Rijndael as the proposed algorithm for the Advanced Encryption Standard.

Sit back for a second and take that in. This open process, from original design specification took YEARS of audit and evaluation from EXPERTS in the crypto field. Consider that as we discuss OSS in general.

Indeed OSS as it relates to crypto is a good thing. But this was because there was stakeholder responsibility involved. Cryptanalysts put their credibility, expertise and jobs on the line in this process. This is not always the case when OSS is written. Many projects are written by CS students in college who like the ideals of the open source movement and want to hack… typically for experience, sometimes for fame. There is a sense of accomplishment, but typically not one of responsibility. This of course is not ALWAYS the case, and there is plenty of great OSS software, like the Linux kernel, Apache, Samba and OpenOffice, that doesn't follow this at all. Which gets me to my point.

Gene Spafford once said that "when the code is incorrect, you can’t really talk about security. When the code is faulty, it cannot be safe." I have used this quote before in other entries because I really think it gets to the heart of the major problem as it relates to secure programming today. Coding for coding's sake is one thing, but designing safe and secure software that our critical infrastructure and businesses use is a totally different beast. And the development methodologies that are based around the expectation of developer responsibility fall into different categories, depending on the programmers involved.

It is true that with OSS, anyone can review the code and audit it. In REALITY how many people ACTUALLY do this? Be honest with yourself. When was the last time you went through every line of the Linux kernel? When a security patch is released for Apache, how many of you go through a significant code review? How many of you ACTUALLY just run apt-get or emerge and suck down the latest binaries? How many of you launch Red Carpet and download the RPMs? How is this any different than running "Windows Update"?

You see, THEORY and REALITY have no place mixing when arguing points about either crystal or black boxed security. Assuming you are following best practices as it relates to patch management, you grab the latest fix and apply it to your systems. You trust your package source and simply install it. Hey, you might even compile it. There is nothing wrong with that. However… you leave that trust in the source. The same source you don’t look at when you type "make".

In the past year we have seen the compromise of the GNU FTP server, compromise of Debian’s development servers, the attempted compromise of the Linux kernel tree, and the release of parts of Microsoft Windows 2000 master sources. These are ALL vendors of trust. We rely on their best practices to protect us. Be it crystal or black... none are perfect.

Coming back from that tangent for a second, let's reflect on actual projects and products. Typically (but not always) black boxed software comes from a commercial vendor that has a business interest in seeing it succeed. They are trying to protect their intellectual property and possibly wish to try to use security by obscurity. (Which rarely works, by the way.) Yet they typically have a sense of responsibility in maintaining their software. They have a financial interest in doing so. When looking at this from a secure programming perspective, however, history has shown these vendors fall flat on their face.

Why? Building secure software has been seen as an impractical goal, because the business has other pressing objectives. Even though secure programming helps them increase efficiency and cut costs in the long run as it relates to the development lifecycle, the burden of company growth has them writing software cheaper and faster which typically isn't of the best quality. But that’s changing.

Although history is riddled with flagrant disregard for secure code quality in the operating systems and applications we use, it is changing because the very industry that accepted this behavior in the past now requires safer and more secure software. If you look at my past entries I have pointed out examples where Microsoft's impressive security structure has continued to build a developer environment fusing secure coding practices into their daily lives. This fundamental shift continues to strengthen the design practices of black box software, which in time should result in a safer and more secure computing environment. We now see companies building commercial black boxed software with proper functional design specs, threat models, and test plans. Code goes through a strict source code management system and gets audited at various levels of development, testing and release. These are all major components of building better software.

With OSS, you rarely see this sort of design thinking go into the project. Developers have an itch to scratch and they go do it. They make it work for their needs and hope others get on board with it so it can be refactored, and hopefully audited. SourceForge is full of such projects that rarely get off the starting blocks. More importantly, there are examples of OSS that get used by many in the open source community, but don’t have a strong developer following. Don't get me wrong. There are amazingly talented OSS developers out there. I am friends with many of them, and I spent years being part of that community and writing my fair share of code. However, you can’t wave the "OSS is better because it’s audited" flag when no one cares to get involved with it. Many projects die because there is no one responsible for their growth. Without corporate backing and fiscal responsibility the code is rarely maintained. Successful projects like the Linux kernel, Apache and Samba got there because there was a great developer following... many with corporate backing (in developer time, money or both). And even then many of these projects have taken YEARS to build a system with some sort of respectable code audit facility… which I don't think we can blindly trust. A good example of this was the huge PGP vulnerability that sat in "open sourced" code for years before being detected... even though the code went through various different audit processes. Further to this, we have seen the DARPA funded 'Sardonix' security auditing project fall by the wayside because the security researchers who were part of the project were not able to get it going. That's really too bad, as I have huge respect for Crispin Cowan (who led the project), and would have liked to see him succeed with this.

In the end, code quality and the "correctness" of software is determined relative to the specifications and the design put in. Arguing that "open source can be audited" is futile when people DON’T actually do it. And many of those that do have no real experience in secure coding practices to do it effectively. There are great examples where I am wrong in that statement (FreeBSD's information handling policies come to mind), but if you look at the entire OSS landscape, I am more generally correct than wrong in this statement. Although it CAN be audited... it rarely is. And when it is, it's rarely done by professionals who know what they are doing. (My apologies to the numerous secure programming developers and test engineers I do know who take pride in their work in this area. I am generalizing here, and not referring to you.)

Education is a key culprit here. Developers are coming out of school with no secure coding experience. They don’t know how to write defensive code, nor do they know how to audit code for such quality. Much of the code quality we expect in software doesn’t exist because the quick time-to-market turnaround of new software sacrifices quality for quantity. And this problem plagues both camps. It's just viewed differently.

Knowing OSS is rarely audited on a routine basis, let's get back to basics here. Any vendor CAN have their source code audited. OSS offers free and open access to source code trees, which makes this easy. Black box vendors such as Microsoft use Shared Source initiatives, and pay 3rd parties to audit their code. An example of this was the .NET Security framework audit completed before its release. Some vendors, when selling to organizations such as the government and military, require code audits and correctness testing through standards such as the Common Criteria, or through in-house code audit teams. The CSE and NSA have entire teams whose function is to do this. Stating that OSS is better for code audits is a fallacy when you look at those responsible for the code. Code quality is going to be dependent on the designers and the engineers, typically being PAID to do it right. You don’t always get that from OSS projects. You can. But you rarely do.

I think both camps are going to see a paradigm shift in the coming years. Especially as more vendors adopt OSS. We see examples of IBM, Apple, Novell and Sun (to name just a few) embracing OSS and putting significant assets... including financial resources... into projects. If they do this correctly and don't muddy the development process with business politics, we might start seeing projects have a more focused design structure in the software. The result should include better secure programming practices, which will include better auditing and improved quality as it relates to security. Hell… I can't wait to actually see such func specs, threat models and test plans for many of the open source projects out there. I would love to read these design docs and learn how they would approach such development and testing. We could learn a lot from their practical experience in these successful projects.

Yet as I say this I look over at Redmond and notice the significant investment it is putting on the table for its own processes, and those for its 3rd party developers. I can note several examples where tools like prefast, prefix, AppVerifier and FxCop are being integrated into our tools, helping us to make more secure software. They are investing in the training of outside developers (next week half my calendar is taken up with free MSDN security webcasts that I am attending) and generally are building a strong foundation for the “next generation” of black box software.

In THEORY code quality and code correctness are enhanced with access to source code. In REALITY that is only the case if code audits are actually done. And done by those that know what they are doing.

Before I end this, I need to take a moment to go on a tangent and discuss responsible disclosure as it relates to crystal and black box security. This is a totally different aspect, and one where OSS is traditionally MUCH better suited. The incentive for full disclosure when new vulnerabilities are found is much more heightened in OSS because people everywhere can see it. In black boxed systems that use closed source, this isn't always the case. We see the time from vulnerability report to fix being much longer in black boxed systems because there doesn't always seem to be the same sense of urgency to fix issues. You can see proof of this in the announcements that are presented on lists such as bugtraq. Companies like Microsoft may take up to 6 months to fix issues, whereas in OSS the turnaround time is rarely more than a few days.

Tangent aside, you will note I am trying not to take sides in this debate. That's because I don't think it matters. The point shouldn't be whether there is access to the source code. It should be about the design and audit practices that are applied to the code base. When the code is incorrect, you can’t really talk about security. When the code is faulty, it cannot be safe. When code isn't audited, you will never be able to know the difference.

Posted by SilverStr at 02:04 PM | Comments (13) | TrackBack

February 12, 2004

Architectural Innovations for Enterprise Forensics

While browsing some of David's writings I came across a paper he presented at the Australian Computer Network and Information Forensics Conference. Entitled Architectural Innovations for Enterprise Forensics, David takes the opportunity to discuss (at a high level) Trusted Computing initiatives such as the Trusted Computing Group (TCG) specifications, Microsoft's Next Generation Secure Computing Base (NGSCB) and Trusted Multi-Net.

These Trusted Computing initiatives are directed towards improvements in the security of user data, attestation of a trust level of a user and/or hardware device, and authentication of a user and/or hardware device, and David provides an introduction in this short paper.

If you know very little about Trusted Computing and would like to learn about it, this paper will put you on the right track. Happy reading!

Posted by SilverStr at 08:32 AM | TrackBack

Microsoft Security - don’t underestimate its secure future

David Cartwright wrote an interesting entry about some of the stuff Microsoft is doing in the security arena. I have pointed this out in entries on my blog for some time now... but David has pieced together information about the "impressive security structure" that Microsoft is building.

To summarize his findings:

  • Security on the Motherboard - The Phoenix BIOS / Microsoft connection
  • Hardware protection against buffer overflows
  • NGSCB: Next Generation Secure Computing Base
  • More secure APIs
  • Secure Programming
  • Improved tools for deployment and patching
  • Some features turned off by default; secure settings turned on by default
  • Expanded channels of communications with customers
  • Security training for channel partners

Sound interesting? You should read his post.

In the face of recent vulnerabilities found (eEye marks their latest findings as "the biggest ever"), we have to realize that these are bugs that were written years ago, before Microsoft began taking security seriously. It will be some time before a lot of the best practices, methodologies and technologies start to work in our favour.

Posted by SilverStr at 08:25 AM | TrackBack

NIST releases Final Version of FIPS 199

Today NIST released the final revision of FIPS 199. Entitled "Standards for Security Categorization of Federal Information and Information Systems", it makes for some great reading.

Enjoy!

Posted by SilverStr at 07:46 AM | TrackBack

February 11, 2004

Microsoft holds Developer Security Webcast Week February 16 - 20, 2004

With the recent flood of issues surrounding MyDoom, it is good to see Microsoft take some time to educate developers on secure coding principles and practices.

Microsoft has announced a special week of webcasts (Feb 16-20) addressing the most important and newly emerging security issues surrounding developers. Topics range from corporate security reviews and computer crime to a host of webcasts aimed specifically at developers. These webcasts are designed to help developers write applications that are resistant to security attacks. Webcasts will address a broad range of issues facing developers today: specific coding techniques to make applications inherently more secure, SQL Server considerations, authentication and authorization, Enterprise Security Portals, and protecting your intellectual property with Code Access Security. Tune in as top industry experts walk you through key security concepts that will help your organization -- and the code you write -- rise to the security challenges we all face today.

Here are the webcasts I will be tuning into:

If you want to check out the full list of available webcasts, you can check it out here.

Posted by SilverStr at 10:41 AM | TrackBack

Passive Information Gathering

Gunter Ollmann has released a paper on "Passive Information Gathering: The Analysis of Leaked Network Security Information". Information leakage caused by weak configurations allows an investigator to passively collect data that was never meant to be disclosed. In the paper he gives step-by-step instructions on how to use this weakness to find useful information in things like DNS records, mail server trace paths, etc.

Although these are older techniques, he has put them together in a way that makes them easy to read and understand. If there is something you didn't know about passive information gathering techniques, you are sure to pick it up here.

Happy reading.

Posted by SilverStr at 08:23 AM | TrackBack

February 10, 2004

Canada's Cyber Security Resource Centre (CSRC)

Well here is something interesting. The Canadian Advanced Technology Alliance (CATA) as part of its Homeland Security Task Force, announced the creation today of a Cyber Security Resource Centre (CSRC), designed to enhance homeland security by characterizing threats to Canada's information infrastructure. These threats come from increasing incidents of malicious code (viruses, worms, Trojans) and focused attacks.

The first step in this initiative is the launch of the CATA CanCERT Alliance. CanCERT will support CATA by acting as the trusted third party for collection, anonymization and consolidation of cyber-incidents reported by the community.

I am impressed to see Canada step up to the plate. Now we don't need to rely on the US's Department of Homeland Security to tell us what threats Canada is exposed to. (They never got it right anyways)

Good job. Can't wait to start reading the reports.

Posted by SilverStr at 01:23 PM | Comments (1) | TrackBack

FxCop: How the Cop beat me down - A Review

Ok, so I have been talking about FxCop for a while, and during all my trials I really liked the tests it performed. Today I thought I would put FxCop 1.23 through the master sources of a product we are about to release. OMG.

Let's just say my C style of writing is not appreciated by the FxCop rules. At the start this morning, I had 588 violations reported by FxCop. Not a pretty sight.

But then I started reading their "Rules Details" to understand why it was flagged.

I am really impressed with the detail of the rules. In most cases I found the "Additional Information" quite useful, although I will admit the documentation could do more to show examples of how to actually resolve the issues.

A majority of the violations failed in a single rule library. As I marshal a LOT of C style structs to P/Invoke native Win32 components as part of my kernel-mode compatibility stuff, I guess this should be expected. I found that around 40% - 50% of the errors were because I use underscores and capital letters (ie: SECURITY_DESCRIPTOR), which FxCop didn't like. I know I could rename things to be more .NET friendly, but I like keeping the structs consistent across the C# and C boundaries of my master sources tree. As such, I "excluded" quite a few violations to pass the test. These weren't actually false positives... they were right. I just believe my style is more right for my needs.

I could have simply turned off that rule check completely... but it did find some internal structs which I could and did change. Something for you to consider: it is better NOT to suppress a complete test. Don't be lazy... analyze every violation and make an informed decision. I personally liked how you can attach a note to each violation you exclude, allowing other developers to understand WHY I excluded the violation from the tests.

FxCop found two security violations in my code, which were focused on the fact I was using a virtual override of WndProc for two forms without setting the LinkDemand. I didn't even know WndProc had a Demand set on it... so it was good to see that FxCop caught it. I spent a great deal of time reviewing this in an attempt to figure out the LinkDemand status of Control.WndProc... only to have someone point out to me an example where it sets the Demand state, which means I just had to add "[System.Security.Permissions.PermissionSet(System.Security.Permissions.SecurityAction.Demand, Name="FullTrust")]" to pass the test.
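
In context the fix ends up looking something like this (my own reconstruction, so treat the exact placement as illustrative):

// Demanding FullTrust of callers satisfies FxCop's security rule about
// overriding WndProc without a matching security check.
[System.Security.Permissions.PermissionSet(
    System.Security.Permissions.SecurityAction.Demand, Name="FullTrust")]
protected override void WndProc(ref System.Windows.Forms.Message m)
{
    // ... custom message handling for the form goes here ...
    base.WndProc(ref m);
}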

The rest of the violations I simply went and fixed. Most were .NET style issues which I am still trying to master/accept (ie: using a property instead of a method) and a lot of localization issues with CultureInfo/IFormatProvider.

After two and a half hours, all tests passed. 588 violations were reduced to 0... with about 200 in the exclude list. I will take some time to think about the impact of the naming conventions, and then go tackle the exclude list at a later date. Some things really should be renamed, but are left as is so they match the C code in the kernel. (Example... .NET doesn't like a function name of OpenFIFOQue. I could pass that test if I changed it to OpenFifoQue. Not a major change, but it's left alone for clarity for anyone who is grepping through the sources to match up code on either side of the kernel boundary.)

Now, with such a great tool I did find some things I think could be improved upon. Most of it has to do with the user interface. There may be workarounds for many of these; I just don't know about them yet:

  1. FxCop locks the assembly, which means I cannot simply run Visual Studio and FxCop together. A recent blog entry on their site says you can, but I couldn't get it to work. If I make a change and recompile the code, the post-build event that copies the executable fails. I tried closing the project but keeping FxCop open... and even then for some reason the UI has it locked. I have to completely close FxCop down each time to properly recompile the code. It would be nice to keep both open, and then run "Analyze" after each rebuild.
  2. Since I was constantly stopping and starting FxCop I was getting perturbed that it wouldn't remember the last project it was working with, and reload it. It would be nice if they had an option to "Load last Project on Start" or something like that.
  3. I couldn't find a way to cmd line the FxCop tests directly in Visual Studio for post-build events. It would be kind of kewl if as a post-build event it could run FxCop and barf the violations to the output window.
  4. When reviewing a violation, it has a field called "Source". I couldn't figure out how to get it to point to the offending line. I am guessing it has to do with some sort of debugging condition in the assembly, but I couldn't figure out how to make it show up.
When you think about it... those aren't big issues. Mostly comfort / style issues which are easy to overlook.

All in all I found FxCop a great tool to use. When I have some time I think I will learn how to write my own security rules, and then put them into FxCop for testing on a regular basis. If I can figure out how to automate FxCop during post-build events... I will even make it a daily build task.

Thanks to the FxCop team for their great work. Nice tool.

Posted by SilverStr at 12:15 PM | Comments (2) | TrackBack

February 09, 2004

Microsoft releases source code to Allegiance Video Game

Wow. Can't believe my eyes. Microsoft has released the source code to their game Allegiance for anyone and everyone to download.

I have never played the game myself, but there seems to be a loyal following. From some of the screenshots, it looks like a sweet game.

If you are a game coder... you might be interested in looking at their code (Warning... over 511 meg download).

Posted by SilverStr at 11:15 PM | Comments (4) | TrackBack

February 07, 2004

The time to break backwards compatibility is NOW. In Longhorn.

Since my original entry pointing to Michael’s post about "The IE Patch (MS04-004) demystified" I have seen a lot of ridiculous and ludicrous comments in the midst of some great insight. I am only thankful that none of those idiots seem to visit my blog, as I am not sure I would appreciate such dim-witted statements here.

Yes, I’m venting. Mostly because in the midst of Microsoft doing something right as it relates to security, people complain. It wasn’t even a month ago that these same people complained about the IE vulnerabilities... only to find something else to complain about after the recent IE patches. Yesterday on one private mailing list I am on I actually heard people discuss "class action" lawsuits against Microsoft for "loss of profits". Idiots. The moderator of that list sure got a piece of my mind on that one.

But that’s not what this post is about. There are plenty of blog entries and news stories around the world that already point out that RFC 1738 STATES under section 3.3 that the HTTP URL format should NOT include username and password information. Don’t believe me?

3.3 HTTP

The HTTP URL scheme is used to designate Internet resources accessible using HTTP (HyperText Transfer Protocol). The HTTP protocol is specified elsewhere. This specification only describes the syntax of HTTP URLs.

An HTTP URL takes the form: http://<host>:<port>/<path>?<searchpart>

where <host> and <port> are as described in Section 3.1. If :<port> is omitted, the port defaults to 80. No user name or password is allowed. <path> is an HTTP selector, and <searchpart> is a query string. The <path> is optional, as is the <searchpart> and its preceding "?". If neither <path> nor <searchpart> is present, the "/" may also be omitted.

Within the <path> and <searchpart> components, "/", ";", "?" are reserved. The "/" character may be used within HTTP to designate a hierarchical structure.

Quite frankly... it appears that Microsoft was wrong to break the original standard in the RFC by adding support for the http://user:password@host form in the first place. And they were right when they removed it. Enough said.

Which gets me to the point of this entry.

When Robert Scoble came to see me recently we got into a discussion about Microsoft’s cardinal rule. "Don’t break the build. Don’t break backwards compatibility." He gave an example that if they simply broke existing software with a patch/change, that action could have devastating effects to Microsoft in client retention, and even bad press. (Sound familiar, in the recent few days?) Steve Ballmer would not appreciate a call from a CTO of a major corporate client screaming that their entire system is broken due to such a change, and as such the cardinal rule is considered their "Prime Directive"… so to speak. That’s interesting.

Why I found this interesting is that this discussion surrounded not only IE, but Longhorn. Its launch is still a long way off, but this kind of rule SHOULDN'T be part of Longhorn. The time to break backwards compatibility is NOW. In Longhorn.

While that statement sinks in and you prepare to send me a nastygram… let me preface it by saying I know this comment makes me look like an ignorant outside observer. I am. I acknowledge that. I live in a small box, and don’t look at it from an end user’s perspective... but as a computer security software architect.

Microsoft has made great strides as it relates to designing better security into their operating systems. I have been saying over and over on this weblog that we won’t see any of these significant changes until Longhorn. And I still believe that. Mostly because it takes a few years from the time code is written until it is available to the mass market. We won’t see Longhorn server until at least 2006, and I would bet it’s not really ready until 2007. I base that on the fact that in the last three release cycles, there has always been a desktop version a year before the server one.

Let’s get back to the topic of this post, as it relates to software development and secure coding. If we look at what Microsoft has been doing as of late, we can see that they have made significant changes to build a foundation for a more secure computing experience:

  1. They have created better error-reporting software. They have found that the top 20% of their errors make up 80% of the problems. Knowing this and capitalizing on it allows Microsoft to significantly prioritize and reduce the bugs that matter the most.
  2. They have created better developer tools to help write more secure software, with the release of tools like prefix, prefast, AppVerifier and FxCop. Their only problem right now with this is that they AREN’T letting developers know about them!
  3. They halted product development for a period of time and retrained their developers to code more securely.
  4. They audited as much product source code as humanly possible and now have a dedicated lead security person for each component of the Windows source code to watch over code quality as it relates to security. Previously they had a clean up crew come in after the fact and try to sanitize the master sources.
  5. Microsoft has begun to provide more secure defaults when shipping new product. As a clear example we have seen the launch of Windows Server 2003 with a smaller attack surface than previous versions of their server product.
  6. Microsoft now provides better tools such as the Microsoft Baseline Security Analyzer to analyze and audit patch management as it relates to security bugs in a proactive manner.
  7. After major security incidents (like MSBlaster and MyDoom) Microsoft has released tools to help respond and fix possible vulnerable and compromised machines. Although these are not timely enough (IMHO), it’s still good to see.
  8. Microsoft has provided a more definitive patch management cycle to address “patch hell” until their newer products get released that have a significantly lessened attack surface, and have better code quality.
  9. Microsoft will be providing better integrated firewalling with their Internet Connection Firewall (ICF), to be released with the next service pack of XP. Ok this item isn’t about secure coding… but more about "secure by default" mentality.
  10. Microsoft is being more open about the entire security process. And not just for PR purposes. More articles, documentation and transparent communication are now available through MSDN, Microsoft employee blogs, and Microsoft’s Security webcasts.

With all these positive moves there is one thing that is missing. I have arrogantly stated in the past that the NT kernel continues to be brittle, and riddled with insecurities and needs to be replaced. I would like to alter that thinking and say now that it is time that the kernel gets refactored.

This argument comes into play because there was way too much code written and added in an insecure state before Microsoft retrained its teams to think more securely. The line of reasoning that code bloat means less secure software has been around forever and is based on simple mathematics. As more lines of code are written, the complexity rises exponentially and exposes the system to more vulnerability and risk. But this is true of all operating systems... and any code. On the secure coding mailing list (SC-L) we have been spending time recently discussing how to maintain better code quality and design more secure software. It’s not easy.

But I look back on a great article Joel Spolsky wrote in which he stated that Netscape made the single worst strategic mistake that any software company can make: they decided to rewrite the code from scratch. He was right. It is much more cost effective to refactor code that is working and just needs to be cleaned up. And that is something that from a secure coding perspective is much more difficult to accept. It is a WAY better idea to design it from the start securely, threat model it properly and code it effectively. Bolting on security after the fact is much harder. Grafting secure coding practices onto insecure code isn't always a sane approach… as it would be much more effective to rewrite that code entirely. This is where refactoring comes in. You can rewrite sections of code, and remove "dead weight" as necessary.

This SHOULD be done in Longhorn. Although I am confident that most of the kernel has been rewritten by now over the years… I think that there are entire areas of code that have to be removed, or at the very least, refactored. There are entire subsystems within Windows that simply should be torn out, as they have been replaced with better systems that should be threat modeled, analyzed and refactored. This might/will break backwards compatibility with some software. Some people might not like that. Well... Microsoft could follow what Apple did with OSX, and include VirtualPC for free and allow users to run their legacy software in XP or Windows Server 2003 through a sandboxed virtual machine, allowing them to bridge the gap until the software vendor has time to update their products, or the client finds an alternative.

Let me give you an example. Why was there a Network DDE Escalated Privilege Vulnerability in Windows 2000 a couple of years ago? Why the hell were people still using DDE in software for Windows 2000, when OLE replaced it, then COM replaced OLE and finally DCOM replaced COM? And guess what... in Longhorn DCOM will be replaced with Indigo! Seems like a PERFECT time to focus on the intricacies of Indigo, design and code it properly (which I would gather they are doing now that they have been properly trained), and provide a clear and clean upgrade path to the new system. Yet I know on Microsoft’s Indigo FAQ they state that Longhorn will still include COM+… but upgraded to include Indigo. *sigh*

There are lots of examples of this within the system. If you think about it for a moment, there are examples ranging from the driver framework to the graphics layer that could be ripped out, refactored and replaced. Longhorn is the PERFECT time to do it, and the most logical step forward in the evolution of the server operating system from Microsoft. With Microsoft already giving access to the Longhorn API, there is no excuse for the learning curve of the new Longhorn API systems to be too difficult for any developer to tackle. Further to this, Microsoft has made great strides to simplify many of the APIs and reduce the total amount of code that needs to be written. If we can agree that more lines of code means more potential vulnerability, we can use simple mathematics to show the risk/return ROI on products being updated to the new system (as it relates to security).

A perfect example is the new Filter Manager that is in Longhorn and is now backported to XPSP2 (and hopefully W2K SP5... done yet Darren? :) ) that is being used for file system filter drivers (FSFD). Filter drivers have been a significant problem in the past for Microsoft. Too many third party drivers (anti-virus, encryption drivers etc) didn’t play nice together and would choke a system. They didn’t scale well, had stability issues and were all around ugly when interoping with other drivers. I know in one case I used to be able to install two separate antivirus drivers and freeze my system! Microsoft hosts "Plugfests" to do interoperability testing to help mitigate these risks… but made a smarter decision and simplified the framework to reduce the actual amount of code you need to write for a FSFD. This forward-thinking maneuver will benefit Microsoft significantly… the security, stability and performance of third party code will improve, as will its ability to interop with other drivers. Complex buggy legacy drivers will be a thing of the past… which only helps the Longhorn platform.

Anyways, enough ranting. You get my point. I think a quote I like from Gene Spafford could best sum this up:

"When the code is incorrect, you can’t really talk about security. When the code is faulty, it cannot be safe."

You may now send me your nastygrams. If they are constructive, please post them here. If not… send them to /dev/null.

Posted by SilverStr at 08:34 PM | Comments (17) | TrackBack

February 06, 2004

Blog Down time

Alan needs to take the temporary server down to replace it with the actual one that had problems last month. As such, at around 3:30pm PST, the blog will be down for a bit. No ETA when it should be back up.

I am going to take this occasion to thank Alan for all his work, and Fred and Neil for all their support to Alan to keep the service up and running. Appreciate the effort guys!

See you guys back on the new pumped up server!

Posted by SilverStr at 02:12 PM | Comments (1) | TrackBack

MyDoom (A,B) Worm Removal Tool for Windows XP and Windows 2000

Microsoft has released a tool that will help remove the MyDoom.A and MyDoom.B worms from infected Windows XP and Windows 2000 systems. Once executed, the tool automatically checks for infection and removes the worm(s) if found. If a machine is infected with MyDoom.B, the tool will also provide the user with the default version of the "hosts" file and set the "read-only" attribute for that file. This action will allow the user to visit previously blocked Microsoft and antivirus websites.

If you have been nailed with the worm, this might help you clean it up. I applaud Microsoft's efforts... but only wish they would have released this sooner. Most people will have cleaned it up by now!

Posted by SilverStr at 11:23 AM | TrackBack

February 04, 2004

The IE Patch (MS04-004) demystified

Michael has written a post demystifying what went into the latest IE patch. He also pointed out Microsoft's knowledge base article on the subject, with a registry setting to re-enable this "feature".

Interesting to note that in his first book, entitled "Designing Secure Web-Based Applications for Microsoft Windows 2000", he even talked about the fact that developers should not rely on this functionality. Guess those scrambling to deal with a workaround to the fix should have listened more closely. More interesting is the fact he points to the exact reference in which the RFC specs do NOT support this hacked format... which means Microsoft was right in removing it. (Although they should never have had it in there to begin with... but that's another story.)

Posted by SilverStr at 04:56 PM | Comments (5) | TrackBack

Basic introduction to Buffer Overflows

Seems like it is "buffer overflow" day. Found a simple tutorial, thanks to JoatBlog, about 'overflows' for those of you that really don't understand what it's all about.

It is a tutorial centered around C, and does a half decent job of providing a beginner's introduction to buffer overflows.

Enjoy.

Posted by SilverStr at 09:28 AM | TrackBack

Secure programmer: Countering buffer overflows

David Wheeler has a good article up on IBM's site on preventing today's top vulnerability... the buffer overflow. If you are writing any sort of code in C/C++ on Unix, this is something you should read.

Enjoy!

Posted by SilverStr at 09:22 AM | TrackBack

February 02, 2004

Lego + Monty Python = Laughs for all

Something completely different.

Take your favorite Monty Python movie....

... Take a tonne of lego ...

... and enjoy an AWESOME Camelot spoof video.

Posted by SilverStr at 11:11 PM | TrackBack

OWASP Top Ten for 2004

The Open Web Application Security Project (OWASP) has released an update to its Top Ten issues, in an effort to provide a minimum standard for web application security.

Funny thing is, many of these issues fall in the same vein as standalone application security. We can learn something from that. Education is key... no matter what the platform.

Anyways, here are the OWASP Top Ten:

  1. Unvalidated Input
  2. Broken Access Control
  3. Broken Authentication and Session Management
  4. Cross Site Scripting (XSS) Flaws
  5. Buffer Overflows
  6. Injection Flaws
  7. Improper Error Handling
  8. Insecure Storage
  9. Denial of Service
  10. Insecure Configuration Management

Posted by SilverStr at 10:09 PM | Comments (1) | TrackBack

More on Windows Script Host (WSH)

Andrew Clinick pointed out an article he wrote on MSDN about Windows scripting with WSH. Entitled "Providing a Secure eXPerience", it covers most of what you would want to know about WSH in XP, stuff which I wish more people knew. As someone who was responsible for writing WSH in the past, Andrew hopes the FBI will take WSH off the list so he can sleep easier at night. For those at the FBI reading this, consider checking out his article and passing it on to the right people over there. Maybe some good will come of that.

Eric Lippert commented that Software Restriction Policies are available in Windows 2000. So there should be no excuse for not using them correctly across all NT5+ core systems. Eric has stated that he will expand on my tidbits about SRP in a future blog entry on his MSDN blog. Appreciate that.

So there you have it. Happy reading!

Posted by SilverStr at 04:09 PM | Comments (2) | TrackBack