January 31, 2005
The 80/20 Rule for Web Application Security
The Web Application Security Consortium has released a guest article written by Jeremiah Grossman (CTO of WhiteHat Security) on "The 80/20 Rule for Web Application Security: Increase your security without touching the source code".
In this article Jeremiah discusses ways to make your website more difficult to exploit with little effort. It's a short, but interesting read.
His basic points include:
All good points, and easy to do. If you work on web apps, you should take a moment to read this article.
PHP Security Consortium
Now here is something interesting. Anyone who really knows me knows I am not a fan of PHP. I have seen WAY too many insecure projects written in the language, and it drives me batty. In my circle of influence I have seen way too many non-programmers (ie: web designers, script-writing server admins etc) think they are developers by jumping on the PHP bandwagon... it's been rather ridiculous.
The failure hasn't been the language itself (although there is a lot to be said about the weaknesses in the language that almost PROMOTE insecure design), but how it's been applied by the programmer.
Well, I was impressed to hear that the PHP Security Consortium was formed this month to battle this. The website says that the consortium is an international group of PHP experts dedicated to promoting secure programming practices within the PHP community. Members of the consortium seek to educate PHP developers about security through a variety of resources, including documentation, tools, and standards.
This is a positive move for the language. Let's hope the effort to educate the PHP community causes a ripple effect and promotes the fixing of many of the problems that exist in the tools and technologies that reside there today.
Congratulations to the PHPSC, and good luck.
January 30, 2005
Cracking cryptographically enabled RFID to bypass automobile immobilizers
Here's proof that rolling your own crypto is not a good idea. Attacking an unpublished proprietary cipher that uses a 40-bit key on the Texas Instruments DST RFID, some students at Johns Hopkins University built a system to break the keys on many vehicle immobilizers and the ExxonMobil SpeedPass system for refueling stations. The result? They were able to start a car with a DST simulator and then go get themselves some free gas.
They even have some good videos showing their methods. Their response on what could be done to fix this?
The most straightforward architectural fix to the problems we describe here is simple: The underlying cryptography should be based on a standard, publicly scrutinized algorithm with an adequate key length, e.g., the Advanced Encryption Standard (AES) in its 128-bit form, or more appropriately for this application, HMAC-SHA1.
I love this stuff. I wish I could go back to school and work on this. :)
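To make the researchers' recommendation concrete, here is a minimal sketch (in Python, purely illustrative — the names and message sizes are my assumptions, not from the paper) of the kind of HMAC-SHA1 challenge-response scheme they suggest, with a 128-bit shared key instead of DST's 40 bits:

```python
import hashlib
import hmac
import secrets

KEY_BYTES = 16  # 128-bit shared secret instead of DST's 40-bit key

def tag_respond(shared_key: bytes, challenge: bytes) -> bytes:
    """What the tag would compute when the reader sends a challenge."""
    return hmac.new(shared_key, challenge, hashlib.sha1).digest()

def reader_verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Reader recomputes the MAC and compares in constant time."""
    expected = hmac.new(shared_key, challenge, hashlib.sha1).digest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(KEY_BYTES)
challenge = secrets.token_bytes(8)      # fresh nonce for every authentication
response = tag_respond(key, challenge)

assert reader_verify(key, challenge, response)
# A response replayed against a different challenge fails:
assert not reader_verify(key, secrets.token_bytes(8), response)
```

The whole point is that nothing here is secret except the key: the algorithm is public and well-scrutinized, and brute-forcing 128 bits is infeasible in a way 40 bits simply isn't.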
January 26, 2005
Running Windows viruses on Linux with Wine
This wasn't really newsworthy as much as it was funny. It seems that NewsForge has an interesting article in which Matt Moen attempts to run some Windows viruses under Linux through Wine.
The results? Most of them could run under Wine, and some would even install a payload. But none were able to propagate. I was surprised at that... I would have thought Wine's emulation would be better than that by now. :)
I wish I had that sort of time to investigate this further. You know, run something like Fedora under VMWare, so I can run Wine, so I can see how the malicious code interacts with the system. I wouldn't be surprised if I could get more damaging results. Oh well. I'll let others play and do that.
January 25, 2005
Introduction to Computer Security Slidedeck
While doing some research this morning I came across a rather lengthy, but interesting slidedeck created by Ellen Mitchell on the Introduction to Computer Security.
I have to say, at 165 slides I hope this was an all day workshop. Wow does the slidedeck cover a lot of things! It is really basic stuff, but if you are new to the arena, or have someone who is quite new, they may learn something from it.
Mantra for Intrusion Prevention
"That which cannot be detected should be prevented; that which cannot be prevented should be detected."
COMP4706 Advanced Network Security - Latest Slidedeck
To my COMP4706 students taking Advanced Network Security you can grab the slide deck I used for my last lesson here.
Remember that scanning hosts you don't own or don't have permission to scan is frowned upon. And remember that the firewall example in the slidedeck is NOT an appropriate firewall, even though it could work. Build a CLOSED firewall, not an OPEN one.
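The closed-versus-open distinction can be sketched in a few lines of code (this is my illustration, not the example from the slidedeck): a CLOSED firewall default-denies and only permits traffic on an explicit allow list, rather than trying to enumerate everything bad.

```python
# Hypothetical allow list for a small perimeter; rules are illustrative.
ALLOW_RULES = [
    {"proto": "tcp", "port": 80},   # web
    {"proto": "tcp", "port": 443},  # web over TLS
    {"proto": "tcp", "port": 25},   # mail
]

def closed_firewall(packet: dict) -> bool:
    """Default deny: a packet passes only if it matches an allow rule."""
    return any(packet["proto"] == r["proto"] and packet["port"] == r["port"]
               for r in ALLOW_RULES)

assert closed_firewall({"proto": "tcp", "port": 443})
# Anything not explicitly allowed is dropped -- including the surprises
# you didn't think to block:
assert not closed_firewall({"proto": "tcp", "port": 3389})
```

An OPEN firewall inverts this logic (default allow, explicit deny list) and fails the moment a new service or attack vector appears that nobody thought to list.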
See you next class!
Update: Fixed link to point to the right slidedeck. Sorry about that.
January 24, 2005
The Dangers of Using Anonymous Proxies
Yesterday in class I was telling students about the benefits of using anonymous proxies for competitive intelligence and for surfing to questionable sites.
joat has a post on the dangers of using anonymous proxies. He brings up some interesting points about the legality of using tools as it relates to several US laws. If you are using such a proxy, you might want to check out his views on it.
January 22, 2005
Book Review - Pragmatic Unit Testing in C# with NUnit
Last week I allocated some time to work on our go-forward strategy for our formal test plan. I have been wanting to put together a documented process for what the development team must keep working on to increase code quality across the board, while at the same time providing an automated testing framework for the QA team to work with and monitor. As I have mentioned previously on my blog, I have taken more and more interest in unit testing over the last year. As I studied the process of extreme programming I realized it wasn't right for me, but that many of its aspects were still quite appealing. Unit testing makes a lot of sense, but I have found it difficult to find materials discussing how to approach it on an existing code base. It's all nice and dandy to talk about adding tests before, or concurrently with, your production code, but how do you integrate unit testing AFTER you have written tens of thousands of lines of code?
Well, I found out while reading Pragmatic Unit Testing in C# with NUnit, written by Andy Hunt and Dave Thomas. This is a no-nonsense book, which quite frankly was refreshing. There was no harping about the benefits of extreme programming; it focused completely on developer-centric unit testing... getting to the meat of HOW to test the methods within your code. If you do not have a lot of experience in unit testing, this book is for you. And it should be read before you tackle something like Test-Driven Development in Microsoft .NET.
The book cleared up a LOT of misinformation from many of the articles I had been reading. Now that I fully understand how we can use things like mock objects to represent network and database failures, the topic is not only interesting to me, but critically important in understanding how to write USEFUL unit tests, and not just fluff. In other words, we can do more than just test arithmetic-type methods and get into real-world code being used.
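The mock-object idea translates directly out of C#/NUnit. Here is a small sketch in Python's unittest (my own example, not one from the book — `build_report` and its database dependency are hypothetical) showing how a mock lets you test the failure path without a real database:

```python
import unittest
from unittest import mock

def build_report(db):
    """Return a report dict, or a safe fallback if the database fails."""
    try:
        rows = db.fetch_rows()
    except ConnectionError:
        return {"status": "degraded", "rows": []}
    return {"status": "ok", "rows": rows}

class BuildReportTests(unittest.TestCase):
    def test_database_failure_is_handled(self):
        db = mock.Mock()
        # Simulate the database being down -- no real network needed.
        db.fetch_rows.side_effect = ConnectionError("db down")
        self.assertEqual(build_report(db)["status"], "degraded")

    def test_happy_path(self):
        db = mock.Mock()
        db.fetch_rows.return_value = [1, 2, 3]
        self.assertEqual(build_report(db)["rows"], [1, 2, 3])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(BuildReportTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

This is exactly the "more than arithmetic" point: the interesting tests exercise what happens when a dependency misbehaves.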
And more importantly, it taught me how to tackle my code base. One test at a time. Each time I work on a new bug, I can write the unit tests for the methods relating to it. Over time, the critical areas which have the most problems will not only be refactored enough to work with unit testing, but will have the complete harness already in place. One method at a time. Seems so simple, doesn't it? Not so daunting when you look at it this way. I don't know why so many texts on the subject miss that.
The book also helps to explain how to design your projects so you don't have test code lying around haphazardly. It shows the right way to set up your test projects, and gives the drawbacks and benefits of different approaches.
Of course the book is focused on NUnit and C#, but the theory and logic are just as useful in pretty much any language you use. I will admit that seeing NUnit in action with C# all through the book made it extremely easy to understand the examples and write my own initial tests. I think I am going to have a lot more difficulty figuring out how to do unit testing with my kernel code in C. I have been looking at CuTest, but the example tests in the README seem weak compared to what I see in NUnit. Time will tell if I can find a similar library for my kernel drivers.
The book is a light read (about 150 pages), done in just a couple of hours. I have spent more time trying the examples and writing my own initial tests to see how effective it is. Overall, a great introduction to unit testing that makes sense. And it has convinced me to start adding unit tests to our C# code immediately, the next time we fix a bug or add new code.
Of course now it adds an entirely new can of worms to our automated build environment. Never fear, the next book in the series is on how to do just that... called Project Automation. Looks like I will have to add that to my list of technical reading.
Anyways, a good book that was easy to read and understand. Well worth the money. If it increases the quality of your code by finding even one bug through the tests, it pays for itself. And that pretty much sums it up for me. If you are a professional developer not using unit testing, you owe it to yourself to go through this book. You might be pleasantly surprised.
Channel 9 interviews Neal (with special appearance by me!)
Last month when I was down at Microsoft doing interop testing at the Plugfest I had a chance to hook up Robert with Neal Christiansen, the guy in charge of part of the kernel components I work with on a daily basis.
If you have any interest in this, check out the video up on Channel 9. You will even see a cameo from me. Yes, that's right. I'm the token geek in gray that's not the Microsoft employee.
January 17, 2005
Browsing the Web and Reading E-mail Safely using Software Restriction Policies
Michael Howard has released an interesting article on using "Software Restriction Policies" to browse the web and read email safely as an Administrator. I wouldn't recommend this, as I am a serious believer in using least privilege and using runas to elevate privileges as needed (hey, even Michael admits and recommends that). However, if you have to, this is an interesting approach of using group policy objects to apply limited rights to an application you do not wish to implicitly trust.
Anyways, good read. Enjoy!
January 16, 2005
COMP4706 Advanced Network Security Slidedeck
To my COMP4706 students taking Advanced Network Security, as requested by so many of you, you can grab the slide deck I used for my lesson today here.
If you are not one of my students you are also free to download it... but out of context (ie: not being in class) it isn't going to make a lot of sense. Especially when we use parts of threat modeling to analyze risks on network topologies.
For those students that requested it, I included the really poor inked diagram I drew when we were discussing multiple trusted and untrusted zones within a single deployment. It's hard to make out, but remember that in that scenario, the database server is in its own trusted zone against the untrusted zone of the LAN. And then the untrusted zone of the client on the Internet has to be controlled into the trusted zone of the LAN hosting the web server.
Anyways, feel free to email me if you have any questions before next class. Remember we will be learning how to scan the ports on the firewall and evaluate vulnerabilities with some pentest tools. This will be needed when you deal with the attack and defense portion of the final exam.
January 06, 2005
The "Higher Security Mindset" - Seven Best Practices to Keep you Safe
Recently I received a couple of emails regarding some of my posts where I refer to the fact that we must have a "higher level" of thinking when it comes to information security. The question in these emails was just what that higher level means... and what it consists of.
I wish I could take credit for this type of thinking, but it really was taught to me by Kevin Day. However, I don't mind passing it on to you to further that knowledge to others. Most of this is ripped from his book "Inside the Security Mind", and I highly recommend you check out the book if you don't already own it.
When looking at infosec as a whole, we have got to stop worrying about the next whiz-bang security tool and start thinking about security best practices that, when followed, will help keep an organization safe. Even though the security landscape is constantly changing, these practices (when applied) will adapt to the highly dynamic nature of information warfare and allow you to repel your adversaries without much incident. And that is what makes a higher security mindset.
So let's talk about seven best practices that, when applied, will do more to protect you than running out to buy the next whiz-bang security tool uninformed.
Think in terms of Zones
Zoning is the process in which you define and isolate different subjects and objects based on their unique security requirements. For those uninitiated to the terms, a "subject" is a person, place or thing gaining access. An "object" is the person, place or thing the subject is gaining access to. I use the terms generically since when zoning you really could be applying it to anything. A file, a server, or even the physical access to your safe. You have probably seen the concept of zoning in Internet Explorer where Microsoft breaks zones down into the Internet, Local Intranet, Trusted Sites and Restricted Sites. This is just one example of how you can break something into zones. Of course the concept of zoning can be applied anywhere, as long as each zone treats security in a different manner.
Although I have seen most people think of zones in a network-centric manner, zoning doesn't have to be limited to that. It can apply to applications, physical areas and even employee interactions with others as a defense against social engineering tactics.
Anyways, a zone is a grouping of resources that have a similar security profile. In other words, it has similar risks, trust levels, exposures and/or security needs. For example, an Internet facing web server will have a different trust and exposure level than an intranet web site. As such, the two should be in different zones. Though you can have umpteen different zones, typically the most common scenarios involve three zones:
- Trusted
- Semi-trusted
- Untrusted
These three zones can apply to almost anything, from network based services, application programming and even physical security layouts.
The trick is separating zones in such a way that we can maintain higher levels of security by protecting resources from zones with lesser security controls. The separation mechanism between zones could be as simple as a firewall, a piece of managed code or a locked door. The goal is to have some degree of control over what happens between the zones. And to have logical communication channels that allow zones to communicate safely where appropriate.
Theoretically it would be nice to live in isolation and never care about other zones. But in reality, at times some zones will need to be able to talk to others. If we didn't allow that, you wouldn't be allowed on the untrusted zone of the Internet from your trusted zone of your internal LAN. It would have to be severed. The trick is to understand the risks of exposure when communicating between zones, ensuring that some sort of filtering safeguard is working in between to determine what is, and more importantly what is NOT, allowed to communicate through the filter. As an example, there is a much higher level of risk in allowing a direct inbound connection from an untrusted zone to a trusted zone. This is why we have firewalls on our perimeters. (You DO have a firewall between the Internet and your computers don't you????) And the risks are significantly reduced if we place an untrusted inbound connection into a semi-trusted DMZ.
See how this all fits together? Zones give us the ability to reduce risk by applying technical safeguards in a logical manner through grouped resources. How we communicate between a trusted and semi-trusted zone would be different than an untrusted to trusted zone. And we can make better security decisions by understanding that.
I have been using a six-step process that Kevin showed me to apply the zoning concept into the decision-making process for infosec. The following procedures can help in that process:
Establish Chokepoints
Since the dawn of time, chokepoints have been a key part of security practices in warfare. A chokepoint is a tight area of control through which all inbound and outbound access is forced to traverse. Kings of medieval times understood that funneling the enemy through tight doorways made it much easier to pour fiery oils down on them. Likewise, it's much easier to keep a thief out of your network when the network has only one gateway leading in and out. In the infosec space, chokepoints also grant us the advantages of:
Chokepoints are a critical component of a higher security mind. They greatly reduce the infinite number of possible attacks that can take place, and thus are some of the best tools to use in information security.
One thing to consider when using chokepoints, though: they also become single points of failure. As such, it is important to increase the availability measures taken in relation to the number of access points consolidated. As an example, if everyone has to go through a single point to access the Internet, it makes a lot of sense to ensure there is a level of redundancy at that chokepoint.
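The chokepoint idea applies to code just as well as to networks. Here is a small sketch (my illustration; the names are hypothetical) where every resource access is funneled through one function, so filtering and auditing happen in exactly one place:

```python
import datetime

AUDIT_LOG = []
BLOCKED_USERS = {"mallory"}

def access_resource(user: str, resource: str) -> bool:
    """The single doorway: check policy, log the attempt, allow or deny."""
    allowed = user not in BLOCKED_USERS
    AUDIT_LOG.append((datetime.datetime.now(), user, resource, allowed))
    return allowed

assert access_resource("alice", "payroll.db")
assert not access_resource("mallory", "payroll.db")
# Every attempt, allowed or not, traversed the chokepoint and was logged:
assert len(AUDIT_LOG) == 2
```

Because there is one doorway, tightening policy or adding monitoring means changing one function instead of hunting down every scattered access path.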
Applying chokepoints is pretty easy. Here are some simple steps when contemplating chokepoints:
Apply Layered Security
I think Bruce Schneier said it best when he stated that "security is a process, not a product." When looking at security architecture, it is important to recognize that no single device is without flaws. Every significant application, server, router and firewall on the market today harbors some vulnerabilities. Additionally, most of these same resources have a good chance of being misconfigured, unmonitored or improperly maintained. On its own, each object will eventually become a weak link that would allow an attacker to get in. As such, layered defenses are crucial to repel intruders and ensure that any one weakness on its own will not let an attacker in (or out, for that matter).
Layered security is a hot topic and I don't have to really go into great detail. But here are a few things you can do to apply layered security in your organization:
Understand Relational Security
Information security involves numerous chains and relationships. Any given object will almost always have a series of relationships with other networks, applications, events, etc., which prove to be of great significance to our security considerations. The security of any object is dependent on the security of its related objects, and if we fail to see these relationships, we will be unable to properly address security. This is called relational security.
A server, for example, may be considered safe because it is not connected to the Internet. It is, however, accessible by the administrator's home computer through a dial-up session. The admin's system itself is connected to the Internet through a broadband connection. Thus, by following this chain of relationships, the server is actually connected to the Internet. Following such chains can point out where systems and networks that are considered to be safe are, in reality, vulnerable. And this is exactly how hackers typically gain access to systems. They go in through less secure back doors to gain access to more trusted systems.
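The chain in that example can be followed mechanically: model access relationships as a directed graph and compute what the Internet can transitively reach. This is my sketch (the node names mirror the scenario above):

```python
from collections import deque

# Directed "can access" relationships from the example above.
ACCESS = {
    "internet": ["admin_home_pc"],        # broadband connection
    "admin_home_pc": ["secure_server"],   # dial-up session
    "secure_server": [],
}

def reachable(graph: dict, start: str) -> set:
    """All nodes transitively reachable from `start` (breadth-first)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# The supposedly isolated server inherits the admin PC's exposure:
assert "secure_server" in reachable(ACCESS, "internet")
```

Walking the graph like this is a cheap way to audit which "safe" systems are, in reality, connected to the Internet through a chain of relationships.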
Vulnerability inheritance is probably the most vital and yet most neglected security relationship. The level of vulnerability within any object should be considered in relation to the vulnerability of its related objects. A file share between a secure system and a vulnerable system greatly diminishes the security of the secure system. If the secure system is accessible in any way from the vulnerable system, then, to some degree, it will inherit those vulnerabilities.
This is exactly how modern worms breach sensitive systems. Which is why I think it's NUTS to have things like nuclear power plants remotely accessible. When will we ever learn!!!
Understand Secretless Security
The best security solutions are those that rely as little as possible on secrecy for protection. Relying on secrets for security has several weaknesses. For example, secrets tend to leak out. If you keep your life savings under your mattress, and yet talk in your sleep... your secret may be easily compromised. Secrets can also be guessed. A thief breaking into your house may just look under your mattress during the burglary. If you magnify this problem by a few thousand end-users and several administrators, then you will probably spend more time securing your secrets than securing your valuables. Let's look at some classic examples where secretless security is commonly applied.
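One classic pattern here is salted password hashing: the verifier stores no usable secret at all, and the scheme's strength comes from a public, well-scrutinized algorithm rather than from keeping the design hidden. A minimal sketch (my own, with illustrative parameters):

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple:
    """Store a random salt and a slow one-way hash, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest  # safe to store: reveals nothing directly usable

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("under the mattress", salt, digest)
```

Even if the stored (salt, digest) pair leaks, the attacker still has to guess the password; the system never depended on the storage itself staying secret.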
Separate Responsibilities
Have you ever heard the phrase "don't put all your eggs in one basket"? Never have all your investments in one industry; never rely on a single person to do a critical process; and never, never, assign all security responsibilities to one employee, one system, or one process. And, if you are a security professional, make sure you are never the one with all the responsibilities and power. (Even if you WANT to be a BOFH... don’t)
Separating responsibilities does not stop with personnel, however. This concept applies just as strongly to placing all our faith in one security application, or one security device. If Server X is the only thing protecting our entire company, performing filtering, content management, intrusion detection and authentication, and running VPN and logging, we have a security issue. No system is perfect, and no security device is unbreakable. (No matter how many vendors claim theirs is... even when offering rewards to hack it) At a minimum we should have something monitoring and protecting the security of our main security devices.
Here are some standard management practices you can take to divide responsibility within your organization:
Plan for Failure
If you read my blog at all, you know I typically talk about this when discussing the design of secure software. It is more important to test the code execution paths when something fails than when it succeeds. This same thinking should be applied to the higher security mindset.
Everything is subject to failure, no matter how robust or expensive it is. Such failures often lead to lost productivity and potential security issues. As such, potential failure scenarios should be considered before any new implementation. When programming an application, failures should be made to lock down security. When a network architecture is designed, failures should not result in bypassing security, as commonly happens; it should fail CLOSED (granting no access). If a power outage occurs, services, applications and devices should apply security during the reboot process. Consider failures in all devices and services, walk through the contingency plan, and consider the security implications therein. This is especially essential for major failure plans like disaster recovery policies.
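The fail-open versus fail-closed distinction fits in a few lines (a sketch with hypothetical names — the "gate" and the unreachable authorization service are my illustration): when the security check itself blows up, the fail-closed design denies access.

```python
def check_authorization(user: str) -> bool:
    """Stand-in for a real check whose backing service is down."""
    raise TimeoutError("authorization service unreachable")

def fail_open_gate(user: str) -> bool:
    try:
        return check_authorization(user)
    except TimeoutError:
        return True   # DANGEROUS: the failure bypasses security entirely

def fail_closed_gate(user: str) -> bool:
    try:
        return check_authorization(user)
    except TimeoutError:
        return False  # failure locks security down; nobody gets in

assert fail_open_gate("anyone")        # the flaw: everyone is let through
assert not fail_closed_gate("anyone")  # the safe default
```

The same exception, the same outage; the only difference is which default the designer chose before anything ever failed.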
I used to get annoyed with Windows Server 2003 when rebooting after a power failure. (Our battery backup only lasts an hour right now, and doesn’t signal Windows properly to halt *sigh*) It always prompts you with a dialog to explain WHY the system shut down unexpectedly. Then I realized what a great opportunity this was. It allowed us to consolidate the logs and make business contingency planning decisions about power based on knowledge collected in the audit logs for the Windows servers. Understanding that on failure it rebooted and then prompted the admin before it would start up allowed us to track what was going on at all times. And this has been very useful information as we make decisions on our new office architecture needs in relation to power.
So now that you are thinking...
If you apply these seven security best practices in your daily infosec life, you will really move to a higher security mindset. Policy decisions will be made on a more informed basis and you will be able to adapt to the dynamic nature of the information security field. Although it may not be perfect, it will go a long way toward effectively applying the technical safeguards within your organization. And that is what will make you a better security professional. Not because you know how to configure and use the next whiz-bang security product. Because you will know how to apply it in the midst of your security best practices to make it work FOR you more effectively.
January 03, 2005
Good Success Quote from Winston Churchill
"Success is not final and failure is not fatal."
Top 10 Threats in 2004 (According to McAfee)
The top 10 malicious threats identified by McAfee AVERT as affecting both enterprise and home users worldwide in the 2004 calendar year make an interesting list. In alphabetical order, here are the top threats for 2004:
Interesting list. It is curious how many are malicious code segments that propagate through known vulnerabilities that could have been patched. Hmmmm... that echoes my previous post, doesn't it?
Who REALLY protects the Internet?
This morning I read an interesting post in which Susan says that in response to a 10 year old's question of "Who protects the Internet?", her response is "I would argue that we all do."
Susan always looks on the bright side of things. I, on the other hand, look at the dark side of infosec and have to disagree. In a perfect world, we all SHOULD be part of the solution, but we rarely are.
Every workstation attached to a network has a great influence over the security of everything else in the entire organization. Once connected to the Internet this is compounded tenfold. Thus the security of information is literally in the hands of those using the workstation: the end-users who rarely care about security. The zombies out there used by botnets are a LIABILITY, not an ASSET. And that liability impacts me. It impacts you. It impacts us all.
You see, now I not only have to make infosec decisions to protect my organization from traditional risks, I have to make decisions to protect against the incompetence of lazy administrators or end-users who have no clue how to manage security. In other words, I typically have to include risk management decisions against the very same people Susan believes are the protectors.
Everyone needs to be responsible for their own house. Good security practices require the effort of the community, wherein everyone does their part to protect their own systems. Unfortunately reality sets in, and that is rarely the case. Don't believe me? Look back over the last few years. How many vulnerabilities were exploited because people DIDN'T have the latest patches? In many cases, the patch was rolled out MONTHS before the attack vector was utilized. Why aren't we using better patch management? And adding technology like intrusion prevention systems to help limit the risks during the exposure window of a new vulnerability?
Probably because such software is not a panacea. Recently I had my own issue in which Shavlik's HFNetChkPro™ Security Patch Management software failed (due to my human error) to effectively protect me. I upgraded to ISA 2004 on my SBS 2003 box, and then downgraded back to ISA 2000. In the midst of this I requested HFNetChkPro to reinstall SP2 for ISA 2000. It told me it was scheduled, and it even forced a reboot. I (erroneously) assumed the patch was in place. It wasn't. Luckily for me it was found within a couple of days, before an exploit was found for the firewall. However, even with my vigilant security practices I failed to manage the patches effectively. Patch management software needs to get easier and more reliable for us to take advantage of it. ESPECIALLY for the end-user. You know... those zombies in that hacker botnet that is spewing forth DDoS against targets like you and me.
I would like to believe we are all doing our part to protect our little corner of the Internet. Unfortunately I am a realist and know this isn't the case. If it were... the massive destructive force of malicious code wouldn't be taking down the critical infrastructure in our society. Has your head been in the sand, so that you don't know what I am talking about? Hostile code and poorly designed software have shown us the vulnerable nature of the Internet:
These are just a few examples. Here is a quote from an MSNBC article I recently read on the subject:
Although corporations, governments and other institutions have gotten more savvy at protecting their computers with firewalls and security software, millions of PCs in people’s homes are sitting ducks for invasive software. That’s why the Slammer virus was able to infect 75,000 computers in just 10 minutes. In South Korea, which has the highest proportion of broadband-connected homes—70 percent—in the world, the top three Internet service providers were shut down, bringing virtually all of the country’s e-mail and Web browsing to a halt. Slammer also disrupted the Davis-Besse nuclear power plant in Ohio, froze a 911 emergency-call-dispatching system in suburban Seattle and took down Continental Airlines’ ticketing and reservation systems. The Blaster worm brought down CSX’s train-signaling system in 23 states and Air Canada’s computer check-in service—and some experts speculate that it might have been a factor in the power outage that threw much of the Eastern United States into darkness.
We know about these problems, but we are having a hard time dealing with them. Worse yet, we are exposing ourselves to more risk by connecting these things to the public Internet without the proper safeguards. WHAT THE HECK ARE SYSTEMS LIKE NUCLEAR POWER PLANTS, TRAIN SIGNALLING SYSTEMS and 911 DISPATCH SYSTEMS DOING ON THE INTERNET IN THE FIRST PLACE?
Many people in charge of these systems just don't get it. Why? Because security is a process and not a product. (Sorry, Schneier) In other words, you can't simply buy a product and be protected. The latest OS isn't going to do it alone. Nor will the latest antivirus. Or firewall. Or IDS. Or IPS. It takes a "higher level of thinking" in which we apply technical safeguards to layer security and defend against multiple attack points. We need to educate the end-user while at the same time simplifying security so that they can get it. If security is thought of as too complex, we have FAILED... something is wrong in the design process.
As security software engineers, we have to bridge that gap between the user and security... in a way that is seen to be CONVENIENT for the user. How do we do that? By applying infosec principles and practices in the DESIGN of secure systems while remembering who is using it... the user. We can't bolt it on later and assume end-users will welcome it. Want an example? Read my Longhorn rant from last year on adopting a least privilege stance for users.
Now I know this next point is going to sound like I am hitting below the belt, but someone has to say it. We have to stop buying security products from vendors that are more concerned with profits than with protecting their clients. As an entrepreneur I understand the need for a company to be profitable. And I fully support that. But not at the sacrifice of the client. I am tired of seeing supposed "security companies" popping up that have developers (and executive management, for that matter) who know NOTHING about infosec policies and practices. Just because you are a good developer does NOT make you a security expert. If you don't have skin in the game, you SHOULDN'T be leading the development of security software. If you don't understand risk management practices, how can you understand how customers will apply your software to help mitigate their risks? And a company simply shouldn't sell the next whiz-bang computer security gadget because it's the current fad for the highest CAGR in software sales.
<aside>Jason: I hope that explains WHY part of our Strategic Objective at my company clearly states:
Our mantra of "Custodit Nuntium" is core to our Code of Ethics and we will put the protection of our clients before the protection of our profits, while still being responsible to our stakeholders in the business.
And I stand by that thinking. The success of our company is through the success of our customers, and every aspect of our business is focused on refining processes to achieve this. Even if that means we will go out and buy a competing product (at a similar price point) for our customer if it is the right thing to do.