February 14, 2004

Shattering the crystal and poking holes in the black box

Let's shatter the crystal and poke holes in the black box.

Recently there has been some banter between a couple of articles on DevX and O'Reilly focused on whether Open Source Software is secure, or whether it is fertile ground for foul play. Both articles make some compelling points, and yet both are flawed. It is impossible to defend either side of the equation when both camps are entrenched in "grass roots" style feelings that blur fiction with fact. FUD seems to be everyone's shield nowadays, and quite frankly it never aids honest and unbiased point/counter-point discussion.

Rather than regurgitate the strong points of either argument, allow me to put this into context from a secure coding perspective. Although it is a tangent from the original discussion, I think if you read through my thoughts here, you will see what I am getting at.

The reality is that both sides miss the fact that THEORY and REALITY don't mix when it comes to the software engineering of today, especially when talking about the crystal box approach to secure code versus the black box approach.

Whenever OSS is discussed in the context of security, the position always ends up leading towards its "golden child"… that of strong cryptography. Since the days of Bletchley Park in World War II, encryption ciphers have typically been reviewed for years by experts in the field. The source is available so cryptographers can audit the entire algorithm and build proofs to show its strengths and weaknesses. When NIST decided to build a new encryption standard for Federal Information Processing (as part of the FIPS standards) back in 1997, they intelligently turned to the crypto field and had the entire process reviewed. AES has undergone a thorough audit process. It took over a year to get the 15 original algorithms reviewed and submitted for consideration. From there, after rigorous testing and cryptanalysis, 5 ciphers survived analysis from the experts. Finally in October 2000, 3 years after the project began, NIST announced the selection of Rijndael as the proposed algorithm for the Advanced Encryption Standard.

Sit back for a second and take that in. This open process, from the original design specification onward, took YEARS of audit and evaluation by EXPERTS in the crypto field. Keep that in mind as we discuss OSS in general.

Indeed OSS as it relates to crypto is a good thing. But this was because there was stakeholder responsibility involved. Cryptanalysts put their credibility, expertise and jobs on the line in this process. This is not always the case when OSS is written. Many projects are written by CS students in college who like the ideals of the open source movement and want to hack… typically for experience, sometimes for fame. There is a sense of accomplishment, but typically not one of responsibility. This of course is not ALWAYS the case, and there is plenty of great OSS like the Linux kernel, Apache, Samba and OpenOffice that doesn't follow this pattern at all. Which gets me to my point.

Gene Spafford once said that "when the code is incorrect, you can't really talk about security. When the code is faulty, it cannot be safe." I have used this quote before in other entries because I really think it gets to the heart of the major problem as it relates to secure programming today. Coding for coding's sake is one thing, but designing safe and secure software that our critical infrastructure and businesses use is a totally different beast. And the development methodologies built around the expectation of developer responsibility fall into different categories, depending on the programmers involved.

It is true that with OSS, anyone can review the code and audit it. In REALITY, how many people ACTUALLY do this? Be honest with yourself. When was the last time you went through every line of the Linux kernel? When a security patch is released for Apache, how many of you go through a significant code review? How many of you ACTUALLY just run apt-get or emerge and suck down the latest binaries? How many of you launch Red Carpet and download the RPMs? How is this any different than running "Windows Update"?

You see, THEORY and REALITY have no place mixing when arguing points about either crystal or black box security. Assuming you are following best practices as they relate to patch management on your systems, you grab the latest fix and apply it. You trust your package source and simply install it. Hey, you might even compile it. There is nothing wrong with that. However… you leave that trust in the source. The same source you don't look at when you type "make".

In the past year we have seen the compromise of the GNU FTP server, the compromise of Debian's development servers, the attempted compromise of the Linux kernel tree, and the release of parts of the Microsoft Windows 2000 master sources. These are ALL vendors of trust. We rely on their best practices to protect us. Be it crystal or black... none are perfect.

Coming back from that tangent for a second, let's reflect on the actual project/product. Typically (but not always) black boxed software comes from a commercial vendor that has a business interest in seeing it succeed. They are trying to protect their intellectual property and may wish to rely on security by obscurity (which rarely works, by the way). Yet they typically have a sense of responsibility in maintaining their software; they have a financial interest in doing so. When looking at this from a secure programming perspective, however, history has shown these vendors fall flat on their faces.

Why? Building secure software has been seen as an impractical goal because the business has other pressing objectives. Even though secure programming helps increase efficiency and cut costs in the long run as it relates to the development lifecycle, the burden of company growth has them writing software cheaper and faster, which typically isn't of the best quality. But that's changing.

Although history is riddled with flagrant disregard for secure code quality in the operating systems and applications we use, it is changing because the very industry that accepted this behavior in the past now requires safer and more secure software. If you look at my past entries I have pointed out examples where Microsoft's impressive security structure has continued to build a developer environment fusing secure coding practices into their daily lives. This fundamental shift continues to strengthen the design practices of black box software, which in time should result in a safer and more secure computing environment. We now see companies building commercial black boxed software with proper functional design specs, threat models, and test plans. Code goes through a strict source code management system and gets audited at various levels of development, testing and release. These are all major components of building better software.

With OSS, you rarely see this sort of design thinking go into the project. Developers have an itch to scratch and they go do it. They make it work for their needs and hope others get on board so it can be refactored, and hopefully audited. SourceForge is full of such projects that rarely get off the starting blocks. More importantly, there are examples of OSS that get used by many in the open source community, but don't have a strong developer following. Don't get me wrong. There are amazingly talented OSS developers out there. I am friends with many of them, and I spent years being part of that community and writing my fair share of code. However, you can't wave the "OSS is better because it's audited" flag when no one cares to get involved with it. Many projects die because there is no one responsible for their growth. Without corporate backing and fiscal responsibility the code is rarely maintained. Successful projects like the Linux kernel, Apache and Samba got there because there was a great developer following... many with corporate backing (in developer time, money or both). And even then, many of these projects have taken YEARS to build some sort of respectable code audit facility… which I don't think we can blindly trust. A good example of this was the huge PGP vulnerability that sat in "open sourced" code for years before being detected... even though the code went through various audit processes. Further to this, we have seen the DARPA-funded 'Sardonix' security auditing project fall by the wayside because the security researchers who were part of it were not able to get it going. That's really too bad, as I have huge respect for Crispin Cowan (who led the project), and would have liked to see him succeed with this.

In the end, code quality and the correctness of software are determined relative to the specifications and the design put in. Arguing that "open source can be audited" is futile when people DON'T do it. And many of those who do have no real experience in secure coding practices to do it effectively. There are great examples where I am wrong in that statement (the FreeBSD information handling policies come to mind), but if you look at the entire OSS landscape, I am more generally correct than wrong. Although it CAN be audited... it rarely is. And when it is, it's rarely done by professionals who know what they are doing. (My apologies to the numerous secure programming developers and test engineers I do know who take pride in their work in this area. I am generalizing here, and not referring to you.)

A lack of education is a key culprit here. Developers are coming out of school with no secure coding experience. They don't know how to write defensive code, nor do they know how to audit code for such quality. Much of the code quality we expect in software doesn't exist because the quick time-to-market turnaround of new software sacrifices quality for quantity. And this problem plagues both camps. It's just viewed differently.
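To make that concrete, here is a minimal sketch in C (a hypothetical example of my own, not pulled from any real project) of the kind of defect a code audit is meant to catch, next to the defensive version schools rarely teach anyone to write:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical example of the defect an auditor hunts for:
       an unchecked copy into a fixed-size buffer. */
    void greet_unsafe(const char *name)
    {
        char buf[32];
        strcpy(buf, name);              /* no length check: overflows if name is 32+ chars */
        printf("Hello, %s\n", buf);
    }

    /* The defensive version: bound the copy and guarantee NUL termination. */
    void greet_safe(const char *name)
    {
        char buf[32];
        strncpy(buf, name, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';    /* strncpy does not always terminate the string */
        printf("Hello, %s\n", buf);
    }

    int main(void)
    {
        greet_safe("world");
        return 0;
    }

Nothing fancy... but it is exactly this kind of issue that never gets caught when nobody actually reads the code.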

Knowing OSS is rarely audited on a routine basis, let's get back to basics here. Any vendor CAN have their source code audited. OSS offers free and open access to source trees, which makes this easy. Black box vendors such as Microsoft use Shared Source initiatives, and pay third parties to audit their code. An example of this was the .NET Security framework audit completed before its release. Some vendors, when selling to organizations such as government and military, require code audits and correctness testing through standards bodies such as the Common Criteria, or through in-house code audit teams. The CSE and NSA have entire teams whose function is to do this. Stating that OSS is better for code audits is a fallacy when you look at those responsible for the code. Code quality is going to be dependent on the designers and the engineers, who are typically being PAID to do it right. You don't always get that from OSS projects. You can. But you rarely do.

I think both camps are going to see a paradigm shift in the coming years, especially as more vendors adopt OSS. We see examples of IBM, Apple, Novell and Sun (to name just a few) embracing OSS and putting significant assets... including financial resources... into projects. If they do this correctly and don't muddy the development process with business politics, we might begin to see projects with a more focused design structure. The result should include better secure programming practices, which will include better auditing and improved quality as it relates to security. Hell… I can't wait to actually see func specs, threat models and test plans for many of the open source projects out there. I would love to read those design docs and learn how they would approach such development and testing. We could learn a lot from the practical experience of these successful projects.

Yet as I say this I look over at Redmond and notice the significant investment it is putting on the table for its own processes, and those of its 3rd-party developers. I can point to several examples where tools like PREfast, PREfix, AppVerifier and FxCop are being integrated into our tools, helping us to make more secure software. They are investing in the training of outside developers (next week half my calendar is taken up with free MSDN security webcasts that I am attending) and generally are building a strong foundation for the "next generation" of black box software.
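To give a feel for what that kind of tooling complains about, here is another hypothetical C fragment of my own (not taken from any of those tools' documentation) showing two defects of the sort static analysis routinely flags, and the corrected routine:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical fragment with two defects of the kind analyzers flag. */
    void log_message(const char *user_input)
    {
        printf(user_input);                   /* untrusted data used as a format string */

        char *copy = malloc(strlen(user_input) + 1);
        strcpy(copy, user_input);             /* allocation result never checked for NULL */
        free(copy);
    }

    /* The same routine after the warnings are acted on. */
    void log_message_fixed(const char *user_input)
    {
        printf("%s", user_input);             /* constant format string */

        char *copy = malloc(strlen(user_input) + 1);
        if (copy == NULL)
            return;                           /* handle allocation failure */
        strcpy(copy, user_input);             /* buffer is sized to fit exactly */
        free(copy);
    }

    int main(void)
    {
        log_message_fixed("patch applied");
        return 0;
    }

Tools don't replace an audit, but they catch this kind of low-hanging fruit before the code ever ships.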

In THEORY, code quality and code correctness are enhanced by access to source code. In REALITY, that is only the case if code audits are actually done. And done by those who know what they are doing.

Before I end this, I need to take a moment to go on a tangent and discuss responsible disclosure as it relates to crystal and black box security. This is a totally different aspect, one where OSS is traditionally MUCH better suited. The incentive for full disclosure when new vulnerabilities are found is much stronger in OSS because people everywhere can see the code. In black boxed systems that use closed source, this isn't always the case. We see the time from vulnerability report to fix run much longer in black boxed systems because there doesn't always seem to be the same sense of urgency to fix issues. You can see proof of this in the announcements presented on lists such as Bugtraq. Companies like Microsoft may take up to 6 months to fix issues, whereas in OSS the turnaround time is rarely more than a few days.

Tangent aside, you will note I am trying not to take sides in this debate. That's because I don't think it matters. The point isn't whether there is access to the source code. It is about the design and audit practices that are applied to the code base. When the code is incorrect, you can't really talk about security. When the code is faulty, it cannot be safe. When code isn't audited, you will never be able to know the difference.

Posted by SilverStr at February 14, 2004 02:04 PM | TrackBack
Comments

Besides, bad code isn't the primary problem with security. Lazy/ignorant/stubborn/"special" people are. The patch for the vulnerability exploited by Welchia/Blaster was out for months. The only reason the infection was as bad as it got was that most individuals had an excuse for not patching right away, and that corporate security has a type of inertia normally reserved for aircraft carriers and 200-ton strip mining equipment, which also delayed patching (It needs testing in the lab first, right?).

Posted by: joat at February 14, 2004 08:41 PM

Nice entry; linked to it from my blog.

Was this prompted by my comment on your Longhorn blog? :-)

Posted by: Peter Torr at February 15, 2004 01:18 AM

*lol* No, this entry wasn't prompted by your Longhorn entry. Funny thing was that I didn't see that comment as I was in the midst of writing this entry. Scary thing is... we both pointed to the same article, just for different reasons.

You know the old adage... great minds think alike... so do ours :)

Posted by: SilverStr at February 15, 2004 08:00 AM

Great comments, Dana. Certainly the most lucid and rational comments that I've seen on this debate. IMHO, arguing "open" or "closed" is missing the point. The point is how the project is run. A project doesn't need to be either open or closed to be well--or poorly--run.

Cheers,

Ken

Posted by: Ken van Wyk at February 15, 2004 10:03 AM

Heh - spooky! I've always heard it as "great minds think alike... and fools never differ" :-)

Posted by: Peter Torr at February 15, 2004 11:12 AM
Many projects are written by CS students in college that like the ideals of the open source movement and want to hack… typically for experience, sometimes for fame. There is a sense of accomplishment, but typically not that of responsibility.

I could also say that "many non-OSS projects are written by students in college, typically for experience, sometimes for fame. There is a sense of accomplishment, but typically not that of responsibility." But neither "statistic" is interesting. Sure, in theory, all those once personal, but now Open Source, copies of Connect Four could be installed across the globe and cause global thermonuclear war ... however the reality is far from that theory.

It is true that with OSS, anyone can review the code and audit it. In REALITY how many people ACTUALLY do this?

You are trying to reduce the argument to one of two possible outcomes: one where only the company producing the software audits the code, and the other where every user of the code does their own audit. Life, unlike a computer, is not binary.

For instance I rarely look at the ingredients list on food I consume ... but I'm not under an illusion that I would be better off if the ingredients list wasn't there and I just trusted the vendor. And there are many examples of OSS being audited by someone not involved in its creation. Even Microsoft has its software "audited" by third parties, they just have to do it without source code ... and they still find bugs.

In the past year we have seen the compromise of the GNU FTP server, compromise of Debian's development servers, the attempted compromise of the Linux kernel tree, and the release of parts of Microsoft Windows 2000 master sources. These are ALL vendors of trust. We rely on their best practices to protect us. Be it crystal or black... none are perfect.

Again you are playing with words. The GNU FTP servers haven't been points of trust for any sane person for a long time. Debian's servers are, to some degree (if you use Debian), but to compare securing remote shell access for thousands of people with the kind of remote security needed by most companies (and I include Red Hat, SuSE etc. in this list) is apples to rocks ... and then you have to admit that in the final analysis the attacker managed nothing more than a DoS attack. And speaking of things that mean nothing, why are you listing an attempt at compromising the Linux kernel ... I get thousands of MyDoom attempts per week, which says nothing about the security of my Linux box. And to be fair, from what I've heard of the Microsoft source leak, it says nothing about Microsoft security ... they'd been forced to release it to lots of people or have those people "defect" to operating systems they could see the source for, and one of those people failed to keep it secure. I'm shocked.

If you look at my past entries I have pointed out examples where Microsoft's impressive security structure has continued to build a developer environment fusing secure coding practices into their daily lives. ... We now see companies building commercial black boxed software with proper functional design specs, threat models, and test plans. ... With OSS, you rarely see this sort of design thinking into the project.

This is exceptionally naive, at best ... and just plain insulting at worst. It sounds like a PR statement, in both content and understanding. "Proper functional design specs, threat models, and test plans" don't mean much when the design includes "allow user to run attachments", the threat model is "assume attachments are safe to run if the user clicks on them", and the test plan doesn't include "install the app on 500 machines with the address books cross-linked and try running an attachment".

Security comes in layers, and is proven over time. And Windows XP came with UPnP enabled, and Red Hat has been coming with services unavailable since what, at least 6.2? Yeah, I know, MS are promising better default security in the future ... but they promised that from 2000 -> XP as well. And while Red Hat customers could audit (or read audits for) both the bug-ridden wu-ftpd and the much nicer vsftpd, and change before Red Hat itself did ... one is expected to just keep trusting Microsoft.

However, you can't wave the "OSS is better because it's audited" flag when no one cares to get involved with it. ... Further to this, we have seen the failure of the DARPA funded 'Sardonix'.

Again, black and white. Sardonix failed; this doesn't mean that no one cares about auditing. Sardonix spent a long time building infrastructure and expecting people to just flock to use the (oh so joyous) WebUI to provide content ... I would argue that this was optimistic, at best. To handwave that "no one audits" because you can't see it is just as untrue as handwaving that everyone audits. In reality you can't tell what is being audited by the developers, so if you know a single "external" developer has audited an OSS project then, until proven otherwise, you can only assume it's got twice the auditing of the "black box" code.

Using the term that "open source can be audited" is a futile discussion when people DON'T do it. And many of those that do have no real experience in secure coding practices to do it effectively. ... Yet as I say this I look over at Redmond and notice the significant investment it is putting on the table for its own processes, and those for its 3rd party developers. I can note several examples where tools like prefast, prefix, AppVerifier and FxCop are being integrated into our tools, helping us to make more secure software.

It probably makes more sense if you add emphasis: "open source CAN be audited". If a program changes security role (think of something that traditionally got trusted data now getting it from an untrusted source) or someone comes up with a new form of attack (think format string attacks), you CAN audit the application, or you CAN pay someone to do it for you.

In THEORY, code quality and code correctness are enhanced by access to source code. In REALITY, that is only the case if code audits are actually done. And done by those who know what they are doing.

This is not true. If I can download the source at any time, I can say "look at the code for application X, it is horrible and the developer obviously doesn't know what they are doing". Then there are significant incentives to enhance the security beyond the bare minimum (assuming you care) under the "threat" of being exposed. It's also possible to actually fix the problem if you do care and you do find anything.

And, yes, I've developed software assuming the world could see it ... and I've audited, and sometimes just fixed, other people's OSS.

Maybe we could distill the discussion down to this: your position is "it hasn't been proven that being able to audit is better, therefore it can't be" ... where my position is "it hasn't been proven that not being able to audit is better, therefore it can't be".

Posted by: James Antill at February 16, 2004 01:06 AM

I have written a reply post on my blog:

http://www.ryanlowe.ca/blog/archives/001224.php

Posted by: Ryan at February 16, 2004 02:31 AM

Regarding your tangent on the time between discovery of a bug and the release of a patch, I would note that one point you didn't discuss is regression testing, which surely has a significant impact on the time to release a patch. For example, the sheer number of client systems that run Windows, and the wide variety of applications running on those systems, require Microsoft to do extensive regression testing of patches, and even then, they still have problems with patches breaking people's software. So the line that Microsoft has to tread is between the people who will complain that it took too long to issue a patch, and those who will complain when the patch breaks some vital piece of software. It's not just an academic distinction, and I'm sure it's something that comes up in OSS, too, but IMO the scale of the problem is much greater for vendors like Microsoft, IBM, et al.

Posted by: G. Andrew Duthie at February 16, 2004 07:59 AM

Dana, this sanctuary rambling is sure different than your thoughts 3-4 years ago :-)

I totally agree with James.

A lot of the myths you're throwing out are answered in the OSS FAQ.

Alan Cox has a good article on The Risks of Closed Source Computing.

The OSI has a good list of readings. Yes, some of it is slightly irrelevant now, such as VA Linux, but in general it all still applies.

Posted by: Wim at February 16, 2004 11:01 AM

Wim,

Ya, my ramblings are different. But that is because I have learned a lot in the last 4 years about the audit process. Think about it for a second. We spent a great deal of time auditing source code as we built a security-hardened embedded system based on Linux. Yet after we went through that process, as new fixes came out and the packages changed, we slowly became so complacent in our own auditing that I would guess the team working on it now has done very little (if any) auditing of the current code. I remember that one of our decisions not to move from the 2.2 kernel to 2.4 was because we simply didn't have the development resources to go through and strip out the stuff we didn't need, and audit the new base. Yet I know that since I left they have progressed toward the new kernel, and they have fewer developer resources than when I was there.

I am not throwing out any myths here. And the OSS FAQ does nothing to address the real issues here. You are missing my point. It doesn't MATTER if the code is open or closed when discussing code audits and code security. What matters is the development process of the code. OSS can be just as strong, or as faulty, as closed source.

It's easy to defend projects that have a large following and financial backing. The Linux kernel, Samba and Apache are perfect examples of this. But consider a small ISV that has a small project that customers need, but other developers don't find interesting. Ryan stated that "many eyeballs make all bugs shallow". That may be true... but only if experienced developers actually spend the time looking at it. That just isn't going to happen with most projects. So simply being open source doesn't MAKE it any better. Nor does it make closed source software any worse.

By the very topic of the entry, and my closing remarks... I thought I was clear that I don't think it's about being open or closed. (My apologies if I wasn't.) It's about performing the audits... by those who know what they are doing. Otherwise, we wouldn't continue to see show-stopping bugs in great OSS like sendmail, PGP etc. I don't fret about the Linux kernel as much as I do about the plethora of PHP-based web programs, or secondary services like sendmail and BIND. They DON'T get the same code review as the other OSS I have mentioned. Yet most businesses blindly use them, exposing themselves to more risk than they should.

Posted by: SilverStr at February 16, 2004 12:51 PM

But that's kind of the point -- the applications that aren't reviewed as carefully as the Linux kernel or Apache don't have the same security concerns (or users worried about security either), therefore they don't get as much attention.

Purely from a security standpoint, an application is only as dangerous as the worst thing it can do. Can a programmer write a root exploit in PHP? And if so, is that the fault of the PHP developers or Apache or the Linux kernel? There are many levels of protection there. Same with Java. Most Java programmers don't have to worry about security because they are sandboxed anyway -- the machine can't be exploited if the virtual machine can't be, and it's audited by Sun or IBM.

So give concern where concern is due for security. Sure, a little utility program could be absolutely broken, but is it a security risk? Probably not ... unless it's written in C++ or shell script and running as root, but people shouldn't be doing that anyway. Like I wrote in my blog, most small-time open source developers should consider writing in a sandboxed environment.

Posted by: Ryan at February 16, 2004 02:19 PM

Ryan,

First off, please don't fall into the common OSS trap that OSS=Linux, and that as such it's about a single platform. It's not. OSS can exist on any platform, which means it can expose risk on any platform.

Example: a bug in the combination of PHP and Apache on Windows would allow an attacker to exploit PHP's ability to view files that reside outside the normal HTML root directory, and execute arbitrary code by inserting a malicious PHP command into the Apache log file. SecuriTeam found this back in 2002... and there were cases of malicious code being uploaded using this attack vector. This vulnerability was in the code for some time.

I know of one company which erroneously assumed that by dropping IIS and going with Apache on Windows they would be secure. Although I personally believe Apache is a great web server, it wasn't immune to being used as a vessel for this attack vector. More importantly... a directory traversal vulnerability that existed in PHP exposed the platform to great risk... allowing remote compromise. BTW... this same vulnerability could have been exploited in a Unix environment just as easily.
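To illustrate the class of bug (a hypothetical sketch of my own, written in C purely for illustration since the real flaw was in PHP), compare a file handler that trusts the requested name with one that rejects traversal sequences before touching the filesystem:

    #include <stdio.h>
    #include <string.h>

    #define WEB_ROOT "/var/www/html/"   /* hypothetical document root */

    /* Naive handler: trusts the name, so "../../etc/passwd" escapes the root. */
    FILE *open_page_unsafe(const char *requested)
    {
        char path[512];
        snprintf(path, sizeof(path), "%s%s", WEB_ROOT, requested);
        return fopen(path, "r");
    }

    /* Defensive handler: reject absolute paths and traversal sequences. */
    FILE *open_page_checked(const char *requested)
    {
        if (requested == NULL || requested[0] == '/' || strstr(requested, "..") != NULL)
            return NULL;

        char path[512];
        snprintf(path, sizeof(path), "%s%s", WEB_ROOT, requested);
        return fopen(path, "r");
    }

    int main(void)
    {
        FILE *f = open_page_checked("index.html");
        if (f != NULL)
            fclose(f);
        return 0;
    }

The check is trivial... which is exactly the point. It only helps if someone actually reviews the code that lacks it.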

This is just one example of where OSS that isn't properly audited is no better (or worse) than closed source. And it can infect any platform that uses it.

I give COMPLETE concern where it is due... in any software that can expose a system to serious vulnerability, especially remotely. You don't immediately need root to access a system. Yet once on it... there is a good chance you can get it, or at the very least use that platform as a launch point against another vulnerable target.

I agree with you that the ideals of a sandboxed environment can considerably reduce this risk. However, it's typically not practical to expect many of the open source projects out there to be written in Java. And you rarely see it in C# (although Mono might very well change that). You will continue to see it built from the tools available, based on the knowledge of the programmers in question. The same ones who don't know how to audit the code... or don't care to.

Posted by: SilverStr at February 16, 2004 02:43 PM

"Ryan stated that "many eyeballs make all bugs shallow". That may be true... but only if experienced developers actual spend the time looking at it. That just isn't going to happen with most projects. So by simply being open source doesn't MAKE it any better. Nor does it make closed source software any worse."

But James' main point still stands, I think. All things being equal, an open source project's code is available for auditing, while a closed source project's code isn't. Whether they are audited or are horribly written are other variables; with one you get to see the guts (and don't have to pay CALs and licensing fees, but that's a different discussion :) and with the other you don't (or at least, not without selling your soul and signing NDAs up the hoop).

Posted by: Arcterex at February 16, 2004 03:27 PM