March 31, 2004
Microsoft Progress Report: Security
Bill Gates fired off one of Microsoft's periodic "Executive E-mail" blasts to customers today, revisiting Microsoft's ongoing security initiatives. In the email he discusses four key areas:
Technology has come an incredibly long way in the past two decades, and it is far too important to let a few criminals stop the rest of us from enjoying its amazing benefits.
March 29, 2004
Patching vs. Intrusion Prevention
I better start this entry off by stating that for a living (what puts bread on my table) I write computer security tools and technology, with my latest research into mandatory access control driven by providing "process rights management" through host-based intrusion prevention. Ya, it's a mouthful, but basically I have written code that grafts onto the Windows kernel to strengthen the Windows platform by providing application confinement and isolation from the rest of the system. The result is that I can apply the rule of least privilege to resources on a machine and provide safe containment... isolating suspect or even hostile activities so they cannot destroy the system.
The reason I am telling you this is that today I noticed a debate on Network World called "Is patch mgmt. the best protection against vulnerabilities?" in which Shavlik Technologies (for) and Sana Security (against) face off. It's hard to say this without you snickering at me, but I HATE it when you square off two vendors to make an assessment for the information security profession when both have a stake in their position. (In case you didn't know, Shavlik sells patch management software and Sana sells intrusion prevention software.) It is typically biased, and slanted towards their products.
Let's get real. The reality is BOTH are right, and BOTH are wrong, for different reasons. Let's look at this from an infosec point of view while understanding the mindset of an administrator responsible for the critical infrastructure of an organization.
Patch management is only effective when actually completed in a timely manner to reduce the threat of exposure from attack. If you look at the most recent trends, most attack vectors are built AFTER a patch is released, as it is much easier for an attacker to disassemble a patch to find the vulnerability(ies) in question and create new hostile code to exploit them. The "for" camp in this argument states that application and OS vendors don't always tell you what the patch fixes, which means you need to patch against the unknown. Here is the problem with that argument. How can an administrator of a Fortune 100 company blindly patch a system with code he knows nothing about... especially if you KNOW the vendor isn't telling you everything? They can't. Which is why they typically do a staged rollout in a 'clean room' to do regression testing against their existing architecture. And in many cases... the patches do more harm to their systems than good. Countless avenues of attack are meanwhile left open, exposing the business to more risk. The time between patch release and exploit release is shortening, as attackers get smarter in their disassembly techniques.
On the other side, the "against" camp states that because customers are not aware of new vulnerabilities they cannot defend against the new exploits... but host-based intrusion prevention software will solve it. There is a catch they don't want to tell you about. Most intrusion prevention systems use a combination of signature-based techniques and whitelist databases to determine access control. The problem with this is that new 0day attacks don't play by these rules, and they can typically get around such techniques. Moreover, if you use a stringent set of "don't" rules... you end up with an administrative nightmare trying to tune the IPS to work in your environment.
The proof is in how signature-based solutions have failed in other security verticals. Look at antivirus and personal firewalls as an example. The latest CSI/FBI Computer Crime and Security Survey shows that of those organizations that reported breaches in the last 12 months, 98% had firewalls in place, and 99% had antivirus. Yet they were still breached. Does that mean we throw the technology out? No. It just means that these tools don't work alone, in isolation. And how much WORSE would the breaches have been WITHOUT the technology in place?
To properly defend against digital threats, we need to use a layered defensive posture which includes it all. We should have firewalls, antivirus, network IDS, host-based IPS and patch management. Our decisions have to be part of a BIGGER process in the security management lifecycle. (This is why Schneier says security is a process, not a product.) Remember when I was talking about the 8 rules of Information Security last year? Using a defensive posture like this touches on almost every rule:
My point here folks is that as vendors, we sometimes seem to use FUD or "tainted" messaging to sell our products. Don't buy into it. (And if you ever see my company do it, please email me a stern warning and point me to this entry.) Always consider the bigger picture in your security management lifecycle when evaluating technology. After all... technology is simply an enabler. It's not the solution!
Oh... and if you ARE going to roll out host-based intrusion prevention on Microsoft Windows servers, contact me. I might be able to help you out. :)
March 26, 2004
Network and disk forensics
Earlier this week I was a guest lecturer at BCIT, where I did my presentation on doing a forensic analysis of a compromised Linux hard drive. I used the same slide deck that I used to present at FVLUG and SFU, but with a different live demo, actually showing an attack that inserts a script into cron to email out the passwd file. Although the slide deck on its own isn't the most useful, for those students that were there it will be a great help to merge with the notes they took during the demo.
One of the questions brought up was whether this technique could be used on Windows hard disks. Yes, it can, but you need to use some of the other tools in sleuthkit. As long as you have a valid dd image, you should be ok.
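The key idea is that you work straight off the raw dd image rather than a mounted disk. As a toy illustration (this is not sleuthkit code; the fake image and offsets are just for show), here is how you might sanity-check that a dd image really holds an NTFS volume before pointing the real tools at it:

```python
import os
import tempfile

def looks_like_ntfs(image_path):
    """Peek at the first sector of a raw dd image and check for the
    NTFS boot sector OEM ID. A real analysis would hand the image to
    sleuthkit tools, but this shows the idea of reading the raw image
    directly instead of mounting the suspect disk."""
    with open(image_path, "rb") as f:
        boot = f.read(512)
    # Bytes 3-10 of an NTFS boot sector hold the OEM ID "NTFS    ".
    return len(boot) >= 11 and boot[3:11] == b"NTFS    "

# Build a tiny fake "dd image" so this sketch is self-contained.
fake = tempfile.NamedTemporaryFile(delete=False)
fake.write(b"\xeb\x52\x90" + b"NTFS    " + b"\x00" * 501)
fake.close()
is_ntfs = looks_like_ntfs(fake.name)
os.unlink(fake.name)
```

Once you know what filesystem you are looking at, the appropriate sleuthkit tools can walk the image from there.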
While looking for some other NTFS toolkits, I found a new one I never tried before called FLAG, which stands for "Forensic and Log Analysis GUI". This has some potential for those people that wish to analyze an NTFS system on a Linux forensic machine. There is even a screenshot of FLAG browsing through the registry offline, as well as another one reconstructing jpegs from a dd image on the fly.
When I have some time I am going to see about cranking out something similar in .NET for Windows. FLAG looks kewl, but to be honest I would like to see something like this for infosec professionals who use Windows but cannot afford tools like EnCase, Forensic Toolkit or iLook, or a dedicated forensics machine like those from DIBS USA or Digital Intelligence.
Course, I rarely have that kind of time, so that might not get very far. Maybe if I just ported grave-robber, ils, ils2mac and mactime, that would go a long way to letting people do this work on Windows. Who knows.
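For what it's worth, the core of a mactime-style timeline is pretty small. A rough Python sketch of the idea (not a port of the real tools, and the column layout is my own invention): collect the M/A/C times for each file and sort them into one chronological body.

```python
import os
import tempfile
import time

def mac_timeline(paths):
    """Build a mactime-style body: one (timestamp, which, path) row for
    each of a file's modified/accessed/changed times, sorted so an
    investigator can read activity in chronological order."""
    rows = []
    for p in paths:
        st = os.stat(p)
        for label, ts in (("m", st.st_mtime),
                          ("a", st.st_atime),
                          ("c", st.st_ctime)):
            stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(ts))
            rows.append((stamp, label, p))
    return sorted(rows)

# Self-contained demo against a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"evidence")
timeline = mac_timeline([f.name])
os.unlink(f.name)
```

The real grave-robber/mactime pipeline obviously does far more (deleted inodes, body file format, time skew), but the sort-three-timestamps-per-file core is the piece worth porting first.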
Arcterex... you rock!
There is a saying out there that goes something like "a friend won't tell anyone when you kill someone... but a really good friend will help you bury the body". I just wanted to thank Alan for helping me 'bury a body', so to speak, tonight.
In less than an hour, at 11:30 at night, I drove out to his place to completely swap out a motherboard, rebuild a kernel and get one of my servers back up and running for an install at the ISP tomorrow. The result? A faster machine I am actually comfortable with taking to the ISP. No video problems. Lots of RAM that is properly recognized. Hard drives running at peak effectiveness. And the kicker... a BIOS that properly handles power management. (Don't ask... it's a LONG story)
Anyways... thanks Alan. You rock!
March 25, 2004
Trip down memory lane
Alan was going through some old home dirs on one of his servers, when he came across an old account of mine from back in 1999.
After being told to stroll down memory lane I did, and OMG. I forgot what my desktops used to look like.
Anyways, for your amusement, here are some screenshots I took of my desktop back in 1999.
The first one was my main development machine back then, and the second one was my laptop, which I still use today for some forensics work. (It's only got a 486 capable of 800x600, so it pretty much sucks for anything except console work)
So, got any interesting shots from back before the turn of the century for me to see?
March 24, 2004
Proactive sales in a small ISV
Eric has posted a great MSDN article on the function of proactive sales in a small ISV.
I love reading his insights on this sort of thing. As the owner of a small ISV myself I completely understand and relate to his writings. This article is no different. Wrestling with the question of when and whether to get a sales guy is important, and understanding a sales guy's traits is quite difficult. He answers this question quite well.
Good job Eric.
Analysis of the Exploitation Processes
Steven Hill (aka "SolarIce") from Covert Systems Research has released a paper on gaining either horizontal or vertical escalation of privileges through the exploitation of particular vulnerabilities in processes.
Discussing things like stack, heap and function pointer overwrites, as well as format string vulnerabilities, it's worth reading if you want to see how it is technically done.
You should have a pretty good grasp of gcc and gdb if you want to really understand what is going on. All in all though, it's a pretty good read... although Steven admits it's far from being a complete work on the subject.
March 22, 2004
IT Security at Microsoft
It is very interesting to see how Microsoft's asset protection takes place through a formal risk management framework, risk management processes, and clear organizational roles and responsibilities. The basis of the approach is recognition that risk is an inherent part of any environment and that risk should be proactively managed. They say that the principles and techniques described can be employed to manage risk at any organization.
It's well worth your time to see how they present the information. Although this is not really "new" information, it's interesting to see Microsoft be so open about it. And man... their slide decks sure are looking much better nowadays.
Handbook of Information Security Management
Personally I prefer to read books in analog (i.e. paper) form, since it's much more comfortable, but if you are into reading an entire book online, check it out.
Of course, this does give me more incentive to rethink the idea of getting a TabletPC... :)
March 21, 2004
Windows XP Service Pack 2 Preview available for download
I have been running a version earlier than RC1 and it has been pretty stable, but that has been in an isolated environment which is pretty clean. I have not yet moved this to any of my production systems, and don't intend to just yet.
If you are curious about looking at XPSP2RC1 (holy acronym batman) this preview will aid you in planning and testing for the deployment of Windows XP SP2 later this year.
March 18, 2004
Techniques for Securing Private Objects in Your Applications
Have you ever wanted to know how you can extend the Windows operating system's rich security features to your own applications? Well there is an interesting article that was published on MSDN this morning that explains just how to do that.
It is focused more on C++, but is quite useful to all programmers. It provides a good introduction to the difference between things like tokens, security descriptors and ACLs, and the author aimed to make security programming practical and approachable with this article.
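To give you a taste of what's involved, here is a toy model of the first-match DACL evaluation the article covers. This is a simplified sketch in Python, NOT the real Win32 AccessCheck API (which takes security descriptors and tokens, not Python tuples), but the ordering rule it shows is the real one: ACEs are walked in order, and a matching deny wins before a later allow can grant the right.

```python
from typing import NamedTuple

class Ace(NamedTuple):
    allow: bool   # True = access-allowed ACE, False = access-denied ACE
    sid: str      # which security identifier this ACE applies to
    mask: int     # bitmask of rights granted or denied

READ, WRITE = 0x1, 0x2

def access_check(token_sids, dacl, desired):
    """Toy model of Windows DACL evaluation: walk ACEs in order; an
    explicit deny on a still-needed right fails immediately, and access
    is granted once every desired bit has been allowed."""
    granted = 0
    for ace in dacl:
        if ace.sid not in token_sids:
            continue
        if not ace.allow and (ace.mask & desired & ~granted):
            return False          # explicit deny hit first -> no access
        if ace.allow:
            granted |= ace.mask & desired
            if granted == desired:
                return True
    return False                  # DACL exhausted without full grant

# Deny ACEs are conventionally placed first so they win:
dacl = [
    Ace(allow=False, sid="S-guests", mask=WRITE),
    Ace(allow=True,  sid="S-users",  mask=READ | WRITE),
]
can_write = access_check({"S-users"}, dacl, WRITE)               # plain user
guest_write = access_check({"S-users", "S-guests"}, dacl, WRITE) # also a guest
```

The article's C++ walks through building the real structures (SIDs, DACLs, security descriptors) that this sketch only pretends at.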
If you are doing any sort of coding on Windows and care about securing your objects, I suggest you read this.
Quality software means more secure software
I found a great article/interview on ComputerWorld with Gary McGraw, discussing hacker exploits and the state of software quality.
I just can't wait to get Gary's book. If his writing style is anything like his interview style, it should be really good.
Anyways, if you are into learning Gary's take on building more secure software by writing better quality software, consider reading the interview.
March 16, 2004
Microsoft releases new tool to monitor network ports
Microsoft has released a new tool today called Port Reporter (PortRptr.exe) that will log TCP and UDP port activity on a local Windows system.
On Windows XP and Windows Server 2003 this service is able to log which ports are used, which process is using the port, if the process is a service, which modules the process has loaded and which user account is running the process.
On Windows 2000 systems, this service is limited to logging which ports are used and when. In both cases the information that the service provides can be helpful for security purposes, troubleshooting scenarios, and profiling systems’ port usage.
This is a small, but really useful tool that runs as a service. I suggest you download it and give it a try if you are curious about the ports open on your machine(s).
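Once port activity is sitting in a log, the payoff is being able to summarize which process owns which ports and spot anything unexpected. The log excerpt below is hypothetical (Port Reporter's actual on-disk format may well differ); the point is the kind of port-to-process mapping the service gives you:

```python
import csv
import io

# Hypothetical log lines in CSV form -- illustrative only.
sample_log = """\
date,time,protocol,port,pid,process,user
2004-03-16,09:12:01,TCP,135,812,svchost.exe,SYSTEM
2004-03-16,09:12:05,TCP,445,4,System,SYSTEM
2004-03-16,09:13:44,UDP,137,4,System,SYSTEM
"""

def ports_by_process(log_text):
    """Fold a port-activity log into {process: {(protocol, port), ...}}
    so unexpected listeners stand out at a glance."""
    summary = {}
    for row in csv.DictReader(io.StringIO(log_text)):
        key = row["process"]
        summary.setdefault(key, set()).add((row["protocol"], int(row["port"])))
    return summary

summary = ports_by_process(sample_log)
```

A process you don't recognize holding a port you didn't expect is exactly the kind of thing this sort of roll-up makes obvious.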
Neat Windows Shell Trick to Launch CMD Window
Today I learned a neat little shell trick thanks to my buddy Gareth.
Have you ever been browsing in Explorer and wanted to drop down to a CMD window at that point? Not easy. You can open a cmd window, drag the path from Explorer to the cmd window as a drop target and then go from there. But he showed me a neat registry hack to make it a right-click menu option!
Steps to add this:
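Something along these lines, typed at a CMD prompt (the key name "OpenCmdHere" is my own choice; only the value strings matter, and as always back up your registry before poking at it):

```shell
rem Add an "Open CMD here" verb to the folder context menu.
reg add "HKCR\Directory\shell\OpenCmdHere" /ve /d "Open CMD here" /f
rem The command it runs: open cmd and change to the clicked folder.
reg add "HKCR\Directory\shell\OpenCmdHere\command" /ve /d "cmd.exe /k cd /d \"%1\"" /f
```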
Now if you right click on a folder, you will have an 'Open CMD here' option which opens the cmd prompt right at that location!
Easy hack... really nice result.
March 15, 2004
Trustworthy Messaging at Microsoft
Microsoft has published some information about their internal need for trustworthy messaging, and how they solved it.
They needed to use email for transmitting business-sensitive data but were concerned about security and data privacy. Microsoft's internal IT group deployed a solution they called trustworthy messaging, offering both senders and recipients three levels of protection for their data. These levels were based on applying Secure/Multipurpose Internet Mail Extensions (S/MIME) digital signatures, S/MIME encryption, and/or rights management policies.
Anyways, if you are interested Microsoft has released a small slidedeck (not that useful) and a Word document (much better than the slidedeck) that gives a glimpse at their thought process on the topic.
March 14, 2004
Subversion + TortoiseSVN = AWESOME SCM solution
Ok, so belay my last message about running Subversion alongside CVS. I don't need to. Subversion is working PERFECTLY.
I spent yesterday setting things up... and all I can say is OMG. I have never been so impressed with how quickly everything snapped together as it relates to developer tools.
TortoiseSVN has made all the difference. It is just amazing. It has such slick integration directly into Explorer it's not even funny. Here is a screenshot of one of my C# code trees checked out of my Subversion repository:
With shell extensions turned on it gives an awesome icon overlay to immediately tell me if a file is out of sync, needs to be checked in, etc. As you can see in that shot, Dacl.cs has been altered and needs to be checked in. A simple right click on it and hitting "Commit" will do that. Or I could step out a directory or two, and it will recursively find which files need to be checked in, and do it for me.
It uses SSH2 and tunnels nice and cleanly through just about everything. The only issue I still need to resolve is how to get the client to use my ssh keys so I don't have to enter my password on every action. It seems to prompt me twice each time I take an action, and that's a bit annoying at times. Apparently it's just a config option to get the TortoisePlink ssh agent to work... I just haven't gotten around to figuring that out yet.
The Tortoise Merge and Diff utils work great. I already used them last night to do some compares, and they "just worked" for me. I like it when I don't have to think and a tool does exactly what it is supposed to do without me having to fuss with it.
I am running Subversion in standalone server mode, and NOT through Apache. I prefer to use the rule of separation for such tasks and I was able to do that nicely, configuring it to work as an unprivileged user with no access except to the source code repo. Here are the steps I took to basically get things going on the Linux side of things (using Debian as my base):
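In rough strokes it amounts to something like the following (a sketch rather than my exact command history; package names assume Debian, and your paths may differ):

```shell
# Install Subversion and create an unprivileged user to own the repo.
apt-get install subversion                      # run as root
adduser --system --group --home /var/svn svn    # locked-down 'svn' user
svnadmin create /var/svn                        # create the repository
chown -R svn:svn /var/svn                       # svn owns only the repo

# No daemon and no Apache module needed: sshd spawns svnserve in
# tunnel mode on demand for each developer connection, e.g.
#   svn checkout svn+ssh://svn.example.org/var/svn/SomeProject/trunk
```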
That's it. At that point I have Subversion running as user svn, with the local repo stored in /var/svn, accessible using SSH tunneling for the development group.
To access it with TortoiseSVN I just do a checkout via a right click and then type: svn+ssh://firstname.lastname@example.org/ProjectTakeOverTheWorld/ and go to town coding my Pinky and the Brain Mind Altering Operating System. (Better known as PATBMAOS, or Longhorn for short)
I still have to get used to how Subversion works, and I could have done a better job by reading their docs BEFORE importing all my code. I wish I had known to use their format of repository/project/trunk, repository/project/branches and repository/project/tags for the dir structure. Not sure how that will muck up my branches and tags. Will have to wait and see.
So, end result? I would sum it up as:
So there you have it. Subversion + TortoiseSVN = My SCM solution. And I am quite happy with how it turned out.
March 13, 2004
Sorry Microsoft, but Outlook 2003 doesn't make the cut
I'm sorry, but Outlook 2003 is just not the productivity application Microsoft wants you to believe it is. I know Robert says it's his godsend for productivity, but after using it now for over a month and a half, I have to agree with Chris and nuke it. I just can't stand how it STOPS my productivity. I even tried asking for help earlier on my blog, and got a few concerned emails from people... but no one had a clue how to diagnose and fix this. And I just can't put up with it any longer.
I was looking SO forward to using Outlook 2003 with Business Contact Manager it wasn't funny. I tried EVERYTHING I could think of to get it to work for my needs, but it's just impossible. Let me explain why it sucks so bad...
IMAP over SSL polling freezes. It actually HANGS Outlook 2003 on me, preventing me from reading email in other folders while it's checking for mail. I can't even flip over to my calendar while a polling event is occurring... it's just that badly designed. Compound that with the fact that Outlook is polling for me every 2 minutes... and it basically means that Outlook is hung MOST of the time.
This is a sad state when an app of Outlook's sophistication can't even handle multithreading between views, folders or even messages. What is WORSE here is that Microsoft KNOWS about this! In Microsoft Knowledge Base Article 826807 they say... and I quote: "When you work with an e-mail message on an Internet Message Access Protocol (IMAP) server, no other action in Microsoft Outlook occurs until the current action that is being processed is finished. If you do not want to wait for the process to complete, the only way to stop the process is to force Outlook to shutdown in Windows Task Manager." They go on to say it could be a latency issue (not likely... it's on a 100 Mbps backbone with the server six feet away), which leaves me shuddering in fear any time it tries to get mail.
If you put your tinfoil hat on for a moment, you start wondering if this is on purpose. This problem doesn't exist against Exchange servers. And IMAP polling in Outlook Express works fine. Funny how that works, isn't it? It's just got to be a conspiracy to covertly get people to move to Exchange. *sigh* Ok, maybe not... but I can't fathom why mail polling is this poorly designed for IMAP in Outlook, but works fine in Outlook Express.
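The galling part is that the fix is ordinary background threading: do the slow network poll off the UI thread so the app keeps servicing events. A minimal Python sketch of the pattern (Outlook is obviously not Python, and the timings here are made up for the demo):

```python
import queue
import threading
import time

def poll_mailbox(out):
    # Simulated slow IMAP poll. Done on the UI thread (as Outlook 2003
    # appears to do it), this would freeze the whole app; here it runs
    # in the background and just hands its result over a queue.
    time.sleep(0.2)
    out.put("1 new message")

inbox = queue.Queue()
worker = threading.Thread(target=poll_mailbox, args=(inbox,))
worker.start()

# Meanwhile the "UI" keeps handling user actions instead of hanging.
ui_events = []
while worker.is_alive():
    ui_events.append("handled user click")
    time.sleep(0.05)
worker.join()

msg = inbox.get_nowait()   # the poll result arrives without any freeze
```

The point isn't that this is hard; it's that a mail client shipping in 2003 has no excuse for blocking every view on one network round trip.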
So in the interim, I have installed Thunderbird and am giving that a try for mail. Works great. Super fast, no hanging, I can read messages while it's downloading others, etc.
Now I just have to find a separate app that will handle my calendar (Mozilla Calendar sucks), contacts and business contact management (perhaps I will look at GoldMine or ACT! again) and sync to my Handspring Prism.
You lost a customer today, Microsoft. I wanted so badly to have my desktop running purely the latest and greatest from Microsoft on a daily basis. But when I can't even check my mail without Outlook freezing, it's time to move on. Hell, even Pine or mutt would be better than Outlook 2003. And that's saying a lot.
March 12, 2004
Update on my Source Control Trials
Well, if you recall, a while back I put a call out about SCM solutions, as I wanted to get away from my combination of command line cvs + ssh + cygwin for source control management.
Since that time, I have reviewed several solutions and still haven't come to a system I am completely happy with. They all have pieces that I like, and most have things that I DON'T like.
If it wasn't for finding TortoiseSVN, I would probably have gone with Perforce. Perforce also works on my Linux servers, and the UI seemed decent enough. I will let you know how it goes as I begin to actually use the new system on a daily basis concurrently with my existing solution. Hopefully within a month or two I will phase out cvs through cygwin and exclusively run TortoiseSVN with Subversion.
I'll let you know how it goes.
March 11, 2004
Application Security Best Practices at Microsoft
Microsoft founded the Application Security Assurance Program (ASAP) to inventory, assess, and—when necessary—ensure resolution of security vulnerability issues found in line-of-business applications. Topics include the program's criteria for assessing applications, the participants in the review process, the requirements for a secure application environment, lessons learned while evaluating applications at Microsoft, and best practices for enhancing the security of applications in development.
March 10, 2004
Rules of Engagement for Information Warfare
Symbiot Security contends that on March 31st of this year, the rules for information warfare are going to change.
Mike Erwin, Symbiot's president, and Paco Nathan, its chief scientist, have released a whitepaper entitled "On the Rules of Engagement for Information Warfare" which argues that best practices for information security in the private sector have focused only on defense, and need to look at offense to provide complete defense.
Drawing on the lawful military doctrine of necessity and proportionality, they believe that as infosec professionals we have the right to counter hostile intent with the subsequent use of force in self-defense.
In other words, when hostile intent such as a DoS flood begins to attack you, they believe that you have the right to attack back through the escalation of response. (Think DEFCON for the Internet.) Although this may seem like an immediate and proper response to some, it is flawed. Let me explain why.
In the physical world, history has shown that the necessity and proportionality of response has been riddled with erroneous decisions which have potentially done more damage than good. Escalating DEFCON levels brought our world to the brink of nuclear war on various occasions during the cold war because of such actions. Rules, roles and responsibilities have kept intelligence agencies in check as they balance response with GOOD intelligence... sifting through potentially covert actions from third parties and building a complete threat analysis before attacking. Even this fails... as we have seen in the case of the erroneous intelligence that triggered the US to invade Iraq. (I will sidestep the political minefield here and assume for a moment that the President of the United States acted in good faith... we can deal with conspiracy theories later.)
Now let's come back from that tangent and reflect on such actions in the digital world. The underlying protocols used to control the Internet were never designed to deal with non-repudiation, nor were they ever designed to guarantee upper-layer authenticity. In other words, it is quite easy to spoof a target at every layer of the OSI stack, providing multiple attack vectors into any data stream. Based on the probable level of threat, they believe the use of a scale of force to block hostile acts - or degrade the network quality of service for indeterminate acts - can be the first step in analysis of the target... allowing for reflection, with a final 'return fire' action to counterstrike the hostile source.
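To make the spoofing point concrete: the IPv4 source address is just a field the sender fills in, and nothing in the base protocol verifies it. A minimal sketch (the addresses are documentation-range examples, not anyone's real hosts; actually putting such a packet on the wire would need raw sockets, which I deliberately skip):

```python
import struct

def ip_to_bytes(ip):
    return bytes(int(octet) for octet in ip.split("."))

def build_ipv4_header(src, dst):
    """Pack a minimal 20-byte IPv4 header. Note there is nothing
    stopping the sender from putting ANY address in the source field."""
    version_ihl = (4 << 4) | 5   # IPv4, header length 5 x 32-bit words
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl,         # version + IHL
                       0,                   # type of service
                       20,                  # total length (header only)
                       0, 0,                # identification, flags/frag
                       64,                  # TTL
                       17,                  # protocol (UDP)
                       0,                   # checksum (left zero here)
                       ip_to_bytes(src),
                       ip_to_bytes(dst))

def parse_source(header):
    """What the victim's stack reports as the packet's origin."""
    return ".".join(str(b) for b in header[12:16])

pkt = build_ipv4_header("198.51.100.7", "203.0.113.9")
spoofed_src = parse_source(pkt)   # whatever the attacker chose to claim
```

A counterstrike system keying its "return fire" off that source field is aiming at whoever the attacker wants it to aim at.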
In my opinion the idea is noble, but the implementation is flawed. The use of such escalation tactics with such aggressive counterstrike capabilities in cyberspace carries the added risk of failure in automated determination, which will allow attackers to more easily launch strikes covertly against their real intended targets. There is a REASON why in the physical world there is a need for two separate keys and an authenticated launch code for nuclear weapons on submarines. It is to prevent such actions. No such safeguards have been proposed for even the first stage of hostile source analysis. The moment you respond to a threat by degrading any sort of service past your own ownership boundaries, you are breaching the remote host's network, making you no better than the hostile source.
Let me see if I can clear that up. In many cases attacks are launched by zombie platforms: 'owned' remote machines allowing the attacker to mask not only their original location, but also their original intent. It is not uncommon for an attack vector to come through vulnerable and exploited systems that reside on home computers or unpatched remote desktops like those used by roaming users. But here is the problem. Both ends go through networks YOU don't own. When you counterstrike "grandma's" computer, you are also affecting grandma's ISP. And all routes in between. You are now part of the problem, not part of the solution.
But it gets worse. What happens if both endpoint providers use a scale of force to engage in defense/counter-defense? Arming procedures for the escalation of response could go from non-destructive to destructive, non-recoverable actions in mere moments. Without human response... or with the response of humans in an operation center who truly don't know what threats they are really susceptible to.
The idea is noble, and I too would like to attack the bastards that attack me. However, you cannot guarantee that the proper target is 'in your sights'. From the brief whitepaper it seems that they believe human interaction is required in the operation center to analyze and escalate response. This would HAVE to be essential... because if you automate it... you will be building the biggest 'cyberweapon' launch pad the Internet has ever seen.
Yet the weakest link in security is the human factor. Checks and balances have to be in place to ensure that there is no misconfiguration... or hostile intent by the person(s) working with the system. The last thing you need is a person who escalates a response against his 'former employer' in a retaliatory strike. Who is going to monitor the monitors? This is a vicious circle.
I don't think it's fair of me to criticize Symbiot's ideas blindly without providing alternatives. To be honest, I don't believe I can properly respond to their rules until I understand them better... which won't be until after March 31st. Until then, we can only speculate on how they intend to do it. For me... this counterstrike approach is ethically wrong. (And in many jurisdictions legally wrong as well.) There is no evidence that this is a more effective way to deal with the current threats we are exposed to, and there is quite a bit of historical precedent that indicates it is totally counterproductive. On top of that, it's this type of "whitehat" tool which would be turned around and used by the "blackhat" community as another attack platform. The last thing we need is another commercial product that is specifically designed to attack other systems.
But... that is just my opinion. And I am biased. I write digital defensive tools for a living. You would think I would welcome this... since it would increase my business *lol* But I don't.
And nor should you.
March 08, 2004
Exploiting Software: How to Break Code
Sorry for the lack of posts. Quite swamped here lately with some of my own work.
I took a quick read through the chapter over lunch, and I must say it looks REALLY good. I am going to need to add it to my list of books to pick up on my next reading frenzy. I am actually heading out in a couple of weeks to relax on a nice secluded island away from computers for a few days... and I might just take Gary's book with me (assuming I get it in time, of course)
Anyways, you can head over to Amazon to order your copy today! Let me know what you think of it if you read it in the next week or so, just in case my expectations are too high. (I doubt it, Gary is a kewl guy)
What do you think?
March 04, 2004
Interesting research on application security metrics
Chris sent a good post to SC-L yesterday about some research @stake did on application security metrics. Although the findings are mostly focused on web application security, it's still quite interesting. Especially since this was completed before OWASP really took off. I thought I would post it here because I know I am going to reference its findings in the future.
Thanks Chris! Good information here.
Chuckle for the day
Ken pointed out a really interesting FoxTrot comic strip on the SecureCoding mailing list today:
Ya, I chuckled too.
March 02, 2004
Microsoft Security Guidance Kit CD v1.0
The CD-ROM contains tools, templates, roadmaps and how-to guides in addition to Microsoft's prescriptive security guidance. The kit is designed to help you implement measures like automating security patch installation and blocking unsafe email attachments to help your organization stay protected.
There is no cost for the CD, and Microsoft is covering the shipping too! I already ordered mine, and I suggest you do the same.
March 01, 2004
A friendlier NSA for your Kids!
Well, it seems the NSA has gone and redone their website. It actually looks pretty good.
More interesting is the section they now have for kids. I was quite impressed with it. Lots of neat information for the budding cryptanalyst. I am going to have my daughter check it out when she gets home from school today. She might enjoy it.
I only wish the CSE would do stuff like this. *sigh*