October 26, 2005

Vulnerability Patching: How Microsoft recently screwed up

I am one to give Microsoft credit where it's due for the secure programming initiatives that have taken place over the last few years. I have been impressed with many of their efforts, which I have blogged about before.

Today I want to turn the tables and show why I also say that the weakest link in security is STILL the human factor. Even with the new secure programming paradigm at Microsoft, a recent set of patches shows that the process is not perfect. Humans are fallible. Even the ones at Microsoft.

Back in April, Microsoft released MS05-018, a patch for vulnerabilities in the Windows kernel that could allow elevation of privilege and denial of service. One part of the patch dealt with overflows in how csrss.exe processes fonts.

Basically, the vulnerability was that a Unicode string holding the face name of a font was copied into a fixed-size buffer without validating its size, causing a potential buffer overflow. The copy was done in the DoFontEnum() function of winsrv.dll.

The vulnerable code looks like:


push [ebp+arg_4] ; pointer to FaceName Unicode string.
lea eax,[ebp+var_70.lfFaceName]
push eax ; pointer to buffer where string will be copied.
call wcscpy ; No length check... it could crash here

It looks like the developers at Microsoft looked at this and decided that the best way to handle it was to validate the values beforehand. This would have the side effect of also fixing potential issues with other data structures that were not being validated. So they added a call to a validation function, ValidateConsoleProperties(), before DoFontEnum() is ever called. Voilà. The reported vulnerability was fixed.

But did they do it right? NOPE.

There is a problem with this. You ALWAYS have to check data as it crosses from an untrusted to a trusted boundary, right before you use it. By validating ahead of a single call site, all they did was patch that specific code execution path, and they missed the fact that DoFontEnum() could be called from elsewhere. It could, which is why the real bug had to be fixed again in a follow-up patch, MS05-049, this month.
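To make the difference concrete, here is a rough user-mode C sketch of the two approaches. All of the names (font_enum_unsafe, caller_patched, FACE_NAME_CCH and so on) are mine for illustration only, not the real winsrv.dll symbols, and the length check is just a stand-in for whatever ValidateConsoleProperties() actually does:

#include <stdio.h>
#include <wchar.h>

#define FACE_NAME_CCH 32   /* LF_FACESIZE: lfFaceName in LOGFONTW holds 32 WCHARs */

/* The consuming function trusts its caller completely. */
static void font_enum_unsafe(const wchar_t *face_name)
{
    wchar_t lf_face_name[FACE_NAME_CCH];
    wcscpy(lf_face_name, face_name);             /* overflows if face_name is too long */
    wprintf(L"enumerating fonts for %ls\n", lf_face_name);
}

/* MS05-018 style: validation bolted on at ONE call site. */
static void caller_patched(const wchar_t *face_name)
{
    if (wcslen(face_name) < FACE_NAME_CCH)       /* stand-in for ValidateConsoleProperties() */
        font_enum_unsafe(face_name);
}

/* The forgotten call site: reaches the same copy with no check at all. */
static void caller_missed(const wchar_t *face_name)
{
    font_enum_unsafe(face_name);
}

/* MS05-049 style: the check lives where the untrusted data is consumed. */
static void font_enum_safe(const wchar_t *face_name)
{
    wchar_t lf_face_name[FACE_NAME_CCH];
    wcsncpy(lf_face_name, face_name, FACE_NAME_CCH - 1);
    lf_face_name[FACE_NAME_CCH - 1] = L'\0';     /* bounded copy, always terminated */
    wprintf(L"enumerating fonts for %ls\n", lf_face_name);
}

int main(void)
{
    caller_patched(L"Lucida Console");           /* protected path */
    caller_missed(L"Lucida Console");            /* unprotected path the patch missed */
    font_enum_safe(L"Lucida Console");           /* safe no matter who calls it */
    return 0;
}

No matter how many callers show up later, font_enum_safe() cannot be talked into writing past its buffer. That is the whole point of checking at the boundary.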

But this also showed me a bigger problem. I have a lot of respect for Michael Howard (the security guru working on training developers to write secure code at Microsoft as part of SWI) and can only chuckle at how these bugs must have made him roll in his chair. In his work on the "19 Deadly Sins of Software Security" there is a chapter on buffer overflows, with a section pointing out that you need to "Replace Dangerous String Handling Functions". His "Writing Secure Code" goes one better and shows how the string copy functions can be handled safely. Yet these principles seem to have been missed by the developers working on the patch. When auditing the code in the kernel, I would imagine that one of the first things they would have done was scan for the likes of wcscpy and replace them with the kernel-safe string functions that already exist, such as RtlStringCbCopy. Heck, Microsoft even has an entire page talking about using safe string functions for kernel developers.
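To show what I mean, here is a minimal kernel-mode sketch (assuming a WDK build environment) of replacing a raw wcscpy() with the safe string routines from ntstrsafe.h. CopyFaceName() and FACE_NAME_CCH are illustrative names of my own, not anything from the actual patch:

#include <ntddk.h>
#include <ntstrsafe.h>

#define FACE_NAME_CCH 32    /* LF_FACESIZE */

NTSTATUS CopyFaceName(WCHAR Dest[FACE_NAME_CCH], PCWSTR UntrustedSrc)
{
    /* The Cb variant takes the destination size in BYTES; it will never write
     * past it, and it returns a failure status instead of silently overflowing. */
    NTSTATUS status = RtlStringCbCopyW(Dest,
                                       FACE_NAME_CCH * sizeof(WCHAR),
                                       UntrustedSrc);
    if (!NT_SUCCESS(status)) {
        return status;      /* source too long (or otherwise bad): reject it */
    }
    return STATUS_SUCCESS;
}

One call, one extra parameter, and the overflow simply isn't possible any more.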

It looks like in MS05-049 the developers fixed this properly. The code now looks like this:


push [ebp+arg_4] ; pointer to FaceName Unicode string.
lea eax,[ebp+var_74.lfFaceName]
push 20h ; very important: the size!
push eax ; pointer to buffer where string will be copied.
call StringCopyWorkerW ; this new function will only copy up to the specified size

That looks better, and it fixes the bug properly: directly in DoFontEnum(), right where the untrusted data is used.

Cesar Cerrudo wrote up a nice little paper on the topic called "Story of a Dumb Patch". Not sure I particularly like the title, but it's rather fitting. You can read his paper if you want more in-depth coverage of the matter.

Moral of the story: Always validate your data when it crosses from an untrusted to a trusted boundary. If you have data coming into a function, ALWAYS assume it's hostile until proven otherwise.

Posted by SilverStr at 11:21 AM | Comments (3) | TrackBack

October 20, 2005

Anyone have an RFC doc for WELF format?

Thought I would see if I could get some blog community involvement in finding documentation on what I believe is a logging format similar to WELF.

If anyone knows the syslog format for Sonicwall's log events when set to "Default", could you please drop me a line at dana@vulscan.com? I want to write a regular expression for the syslog events, but I notice they come in various formats, and I need to get the parsing right.

I found a few references to the fact that the format is actually WELF, and that the documentation is on the WebTrends website... but the document (welf3.doc) is no longer available.
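In the meantime, here is roughly what I am after, sketched as a hand-rolled key=value tokenizer in C rather than the regex I will eventually write. It assumes the format really is WELF-like (space-separated key=value pairs, values optionally double-quoted), which is exactly the assumption I am trying to confirm, and the sample line is made up rather than taken from a real Sonicwall:

#include <stdio.h>

/* Split a WELF-style line into key=value pairs and print them. */
static void parse_welf_line(const char *line)
{
    const char *p = line;

    while (*p) {
        while (*p == ' ') p++;                   /* skip separators */
        const char *key = p;
        while (*p && *p != '=' && *p != ' ') p++;
        if (*p != '=') break;                    /* not a key=value pair; stop */
        int key_len = (int)(p - key);
        p++;                                     /* skip '=' */

        const char *val = p;
        int val_len;
        if (*p == '"') {                         /* quoted value: read to closing quote */
            val = ++p;
            while (*p && *p != '"') p++;
            val_len = (int)(p - val);
            if (*p == '"') p++;
        } else {                                 /* bare value: read to next space */
            while (*p && *p != ' ') p++;
            val_len = (int)(p - val);
        }
        printf("%.*s => %.*s\n", key_len, key, val_len, val);
    }
}

int main(void)
{
    /* Made-up sample, NOT a real Sonicwall event. */
    parse_welf_line("id=firewall time=\"2005-10-20 15:48:00\" fw=192.168.1.1 pri=5 msg=\"dropped packet\"");
    return 0;
}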

If you happen to have a URL to the log format, please let me know.

Thanks!

Posted by SilverStr at 03:48 PM | Comments (3) | TrackBack

October 17, 2005

Exploiting Windows Device Drivers

Piotr Bania has written a paper on "Exploiting Windows Device Drivers".

Now, before you get all riled up fretting that Windows is doomed, please note as you read through this that for this approach to work, you have to have administrative privileges on the system to install code at ring0. You will need to find a vulnerable driver (OK, that's not THAT hard, I guess), and Piotr's method requires that you MUST be in your thread's context at the time of exploitation (well, that's more an issue with KeUsermodeCallback than anything else).

All little nuggets that make this more difficult to execute in a real-world situation. With that said, however, this is a maturing of the attack vector. Due to the lack of technical papers on the subject (even though Hoglund's rootkit book is now out there), the results of Piotr's research will go a long way toward fueling more work in this space. His paper introduces a device driver exploitation technique and provides a detailed description of the techniques used, including full exploit code and sample vulnerable driver code for testing.
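To give a feel for the class of bug his sample driver demonstrates, here is a minimal sketch of my own (not Piotr's code) of a vulnerable buffered-IOCTL dispatch routine. The classic mistake is copying the caller-supplied input into a fixed-size local buffer without checking InputBufferLength:

#include <ntddk.h>

/* IRP_MJ_DEVICE_CONTROL handler, assuming a METHOD_BUFFERED IOCTL. */
NTSTATUS DispatchDeviceControl(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);
    ULONG inputLength = stack->Parameters.DeviceIoControl.InputBufferLength;
    UCHAR localBuffer[64];

    UNREFERENCED_PARAMETER(DeviceObject);

    /* BUG: no check that inputLength <= sizeof(localBuffer), so any caller who
     * can open the device can smash this kernel stack frame. The fix is a
     * simple bounds check before the copy. */
    RtlCopyMemory(localBuffer, Irp->AssociatedIrp.SystemBuffer, inputLength);

    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = 0;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_SUCCESS;
}

Find a shipping driver with a hole like that, and the techniques in the paper are about turning it into reliable ring0 code execution.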

If you are familiar with IA-32 assembly and have previous experience with software vulnerability exploitation, you might find this article interesting. I would suggest, as Piotr does, reading the two whitepapers mentioned in his paper as a first step toward fully understanding his approach.

Posted by SilverStr at 12:23 PM | Comments (1) | TrackBack

October 10, 2005

"Build Security In": No more excuses in learning how to write secure code?

For years the secure software development community has discussed how to "Build Security In". On the various lists we belong to, we have explored principles and practices while arguing about how to document them so that people outside our circle of influence can understand them.

Gunnar Peterson over at Cigital contacted me over the weekend and informed me about some work he has been part of that is "doing it"... not just talking about it.

The US Department of Homeland Security (DHS) has been supporting a project called "Build Security In", and it is now live. As Gunnar puts it, it is a resource of materials for software developers who want to write more secure code. The site has a ton of artifacts, activities, patterns, and so on that developers and architects can use to address security from the earliest stages of development.

From the official statement:

Build Security In is a project of the Strategic Initiatives Branch of the National Cyber Security Division (NCSD) of the Department of Homeland Security (DHS). The Software Engineering Institute (SEI) was engaged by the NCSD to provide support in the Process and Technology focus areas of this initiative. The SEI team will develop and collect software assurance and software security information that will help software developers, architects, and security practitioners to create secure systems.

I took some time over the weekend to go through the materials and I have to say I am impressed. A lot of thought and effort went into doing this right, going deep in the areas that really matter to building secure software. You can look at the "Process Agnostic Article View" to see how it all breaks down, but I can give you a quick overview by listing the areas they broke it down into:

  • Architectural & Design
    • Architectural Risk Analysis
    • Threat modeling
    • Principles
    • Guidelines
    • Historical Risks
    • Modeling Tools

  • Code
    • Code Analysis
    • Assembly, Integration & Evolution
    • Coding Practices
    • Coding Rules
    • Coding Analysis

  • Test
    • Security Testing
    • White Box Testing
    • Attack Patterns
    • Historical Risks

  • Requirements
    • Requirements Engineering
    • Attack Patterns

  • Fundamentals
    • Risk Management
    • Project Management
    • Training & Awareness
    • Measurement
    • SDLC Process
    • Business Relevance

  • System
    • Penetration Testing
    • Incident Management
    • Deployment & Operations
    • Black Box Testing

Yep, that's a lot of great content. You need to head over there, dig in, and start reading for yourself.

Happy reading!

Posted by SilverStr at 09:28 AM | TrackBack

October 05, 2005

A lesson for OSS: Nessus drops the GPL

I wondered how long it would take for Renaud to complete the transition of Nessus' licensing from open source to closed.

Seems like today is the day. He announced that Nessus 3.0 will still be free of charge (for now), but will NOT be released under the GPL. In his words:

Nessus 3 will be available free of charge, including on the Windows platform, but will not be released under the GPL.
Nessus 3 will be available for many platforms, but do understand that we won't be able to support every distribution / operating system available. I also understand that some free software advocates won't want to use a binary-only Nessus 3.

As a fellow entrepreneur, I understand that he wants to find ways to increase revenue and protect his interests. But I also think the way he positions his reasons is slightly flawed. His reasoning is that:

"virtually nobody has ever contributed anything to improve the scanning _engine_ over the last 6 years."

I wouldn't doubt that's the case. But this quote to the Nessus list bugged me today, and I will tell you why. In May 2002 I formed a company called VulScan Digital Security. My plan was to port the Nessus engine to Windows (keeping the engine under the GPL) and design a more in-depth proprietary management tool for network pentesting, to compete against the big boys who were charging insane amounts of money. I was about a quarter of the way through the port when I ran into some issues with the NASL scripting, and I tried to contact Renaud and his crew to point them out. The help I got? Squat. Nothing. They barely even communicated with me. I only ever got a couple of email responses: one saying I was free to do it when I asked if I could in the first place, and a follow-up with a quick thanks for an issue I found. At that point I realized I wouldn't be getting any support, and I dropped the project. When you can't get support from the original authors, it doesn't make a lot of sense to carry on.

Now he is pointing out that he received no contributions to his code. Of course not. No one wants to work with someone like that without forking off into their own project. And we all know how f*cked forked projects normally end up.

Now, Fyodor and the Nmap project, on the other hand, "get it". Any time I have come across an issue and asked for help, Fyodor has always emailed me back in a timely manner and with useful information. And you know what? I have submitted patches to fix things once I got my head around what the real problem was. The whole raw socket XP SP2 fiasco had a fix within 4 hours of Fyodor and me talking about it. After my patch submission we found that a new ARP caching issue also existed. It only took me another couple of hours to write and test that fix, and Fyodor put it into the Nmap base to get Windows people going again. Give and take. THAT's how an open source project should work.

Today Fyodor posted an email discussing how Nmap will not follow Nessus. Thank you for that, Fyodor. As a regular nmap user, I appreciate it.

I wish Renaud all the greatest success in marketing Nessus. Let it be a lesson to all of us, though. Open source software is about give and take. If everyone just takes and never gives back, don't assume it will always be there for you. On the flip side, if you manage an open source project and want help, make sure you give respect to those willing to dig in and help. Otherwise they will leave you just as quickly.

Have an interesting open source vulnerability scanner you are working on, or planning to fork off Nessus? Email me at dana@vulscan.com and let me know.

Posted by SilverStr at 03:24 PM | Comments (2) | TrackBack