September 30, 2004
Why Threat Modeling matters
Today I thought I would drop a post about WHY threat modeling matters. I recently came across an incident in my own software where, quite frankly, an unknown vulnerability existed because the threat model had not been completely thought out. I am not ashamed to admit this was a failing on my part, especially since I led the entire process. I could easily have swept this under the rug since I caught it before the product was released, but I think there is a good lesson to be learned here, and I would like my fellow secure software architects to see what I took away from it.
What is funny about this is that it was a compounded failure based on the traditional old adage of 'get it to market'. In the spring we launched a pre-release program of our Intrusion Prevention System with great success. The software did exactly what it was intended to do, and apart from some UI enhancement issues and a conflict bug with Symantec's antivirus product (unfortunately a COMMON problem in our industry), everything was looking good for an end-of-summer launch.
Routinely I hold what Joel likes to call 'hallway usability tests'. Basically you pull in people who have no clue about your software, let them muck with it, and watch how they deal with it. I try to do this on any major new feature I add, but since this was a new product I did a complete software test like this with three different groups of people.
I started off by getting it in front of a small .NET user group. The reason was simply that this was my first serious attempt at using C# in a standalone app, and I wanted to learn about UI design principles as they related to the platform. The testing went ok, and I quickly realized the UI needed an overhaul... which ended up with me converting the application to use Microsoft's new Inductive User Interface.
The second set of tests I did was at a local university. I basically monopolized a class and had students try to cause the application to die. Rewarding the top student with the purchase of his or her next term's books really seemed to drive it home, and I had a lot of fun watching them bust the application. I walked away with some interesting lessons and quickly made the necessary changes to deal with them. Although I had a lot of failure code paths for tainted data injection, what I didn't account for was data that matched the regex but simply didn't properly exist. (i.e., a file path that was valid, but didn't exist in the proper form. Although I checked to see if the file existed, what I didn't do was VALIDATE how the system did the check. More about this later.)
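To make the class of bug concrete, here is a minimal sketch in Python (not the actual C#/kernel code from the product, and using a POSIX-style `..` alias rather than the Windows specifics above): a path can pass a "looks valid" regex and even point at a real file, while still being spelled differently from the path your rule expects. Canonicalizing before comparing catches the alternate spelling.

```python
import os
import re
import tempfile

PATH_RE = re.compile(r"^[\w./-]+$")  # naive "looks like a path" check

def is_protected(candidate: str, protected: str) -> bool:
    """Naive check: regex-validate, then compare the raw strings."""
    return bool(PATH_RE.match(candidate)) and candidate == protected

def is_protected_canonical(candidate: str, protected: str) -> bool:
    """Safer check: canonicalize both paths before comparing, so an
    alternate spelling of the same file cannot slip past the rule."""
    return os.path.realpath(candidate) == os.path.realpath(protected)

base = tempfile.mkdtemp()
secret = os.path.join(base, "secret.txt")
with open(secret, "w") as f:
    f.write("guarded")
os.mkdir(os.path.join(base, "subdir"))

# An equivalent but differently-spelled path to the same file:
alias = os.path.join(base, "subdir", "..", "secret.txt")

print(is_protected(alias, secret))            # False - rule bypassed!
print(is_protected_canonical(alias, secret))  # True  - bypass caught
```

The point is the same one the post makes: checking that a file exists is not the same as validating how the system resolved the name you were given.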
Then I did something stupid. I let feature creep walk in and modified my path testing code WITHOUT properly updating the threat model. What the fix did was expose me to a different, more serious vulnerability, which I didn't figure out till after my third set of usability tests.
For my third set of usability tests I invited some colleagues and friends from various industries who were REALLY computer savvy, but were not security administrators. I asked them to beat on the thing silly, and ran a contest for most critical bug found and most bugs found. I also turned the night into a full out LAN party in an effort to get these guys to come out and really get into the nitty gritty of ripping the thing to shreds.
One of the incidents reported didn't immediately seem important; it wasn't until I was reviewing the bugs in the lab that I realized it was so critical it would have me HALT product release until a serious architectural change was made. And you wanna know why? Because I got sloppy and didn't update the threat model when I came across a scenario that I knew exposed the product to greater risk.
Now to be fair, this isn't 'sloppy' as in being negligent. The potential of the original bug occurring was so small that during risk analysis I determined the user would already have high enough privileges to be able to turn off or bypass the safeguards for it to occur. Basically you could bypass the entire IPS by simply renaming volumes on the system. i.e., if you were protecting the 'c:\windows\' directory and an adversary remapped C: to D: (making the %systemroot% dir be 'd:\windows'), the rules would never match and would therefore completely give access to the dir. Such a volume renaming isn't something that commonly occurs... it could possibly break the system if done incorrectly. And only an Administrator could do it... meaning they would have enough privileges to disable the IPS anyways. But the risk was still there.
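A toy Python sketch of why a drive-letter-based rule is fragile (the device names and volume table here are purely illustrative data, not real Windows API calls; on a real system the kernel resolves the letter-to-device mapping): if the rule is keyed to the letter, remapping the letter defeats it, while a rule keyed to the underlying device survives.

```python
# Hypothetical volume table: drive letter -> underlying device ID.
# Purely illustrative; a real system would query this at runtime.
volumes = {"C:": "\\Device\\HarddiskVolume1"}

PROTECTED_PREFIX = "C:\\windows\\"                         # letter-based rule
PROTECTED_DEVICE = "\\Device\\HarddiskVolume1\\windows\\"  # device-based rule

def blocked_by_letter(path: str) -> bool:
    """Fragile rule: match on the drive-letter prefix as written."""
    return path.lower().startswith(PROTECTED_PREFIX.lower())

def blocked_by_device(path: str) -> bool:
    """Sturdier rule: translate the letter to its device ID first."""
    drive, rest = path[:2], path[2:]
    device = volumes.get(drive.upper(), drive)
    return (device + rest).lower().startswith(PROTECTED_DEVICE.lower())

# Adversary remaps the volume: what was C: is now D:
volumes = {"D:": "\\Device\\HarddiskVolume1"}
attack = "D:\\windows\\system32\\config\\sam"

print(blocked_by_letter(attack))  # False - rule never matches
print(blocked_by_device(attack))  # True  - still caught
```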
The point was that I learned about this and decided that the best course of action was to write code to watch for volume remounts, and then internally REMAP the paths so that it could handle the situation. What I SHOULD have done was build a proper data flow diagram and then a detailed attack tree to look DEEPER into what the real threat was... issues relating to invalid file name pathing. This would come back to haunt me later. *sigh*
During the third set of tests, one of the testers came across an interesting situation which relates to short file names (SFN). Basically he stumbled across an interesting problem in which a pathing vulnerability existed with the SFN to long filename (LFN) conversion. It was actually kind of kewl as far as bugs go. If you opened up a target file under protection in Notepad, it would properly be DENIED. However, if you opened up the same file in WordPad, WordPad would aggressively try to open it 5 times, and after failing would then try to open it using different pathing... including SFN pathing. (You know the naming convention, where MYDOCU~1 represents "My Documents".) Voila... a way to bypass the pathing rules in the IPS. And a situation NEVER accounted for in the labs, since we have our NTFS disk configurations set up to ALWAYS guarantee LFN. (I didn't learn about that until doing a post-analysis of the whole situation... another practice you really should do after finding critical bugs.)
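The SFN bypass boils down to this, sketched in Python with a hypothetical hard-coded 8.3 name table (a real implementation would have to ask the file system for the mapping, which is exactly the luxury the kernel doesn't hand you): a rule that compares raw path strings never sees the short spelling, so expanding each component to its long name before matching is what closes the hole.

```python
# Hypothetical 8.3 short-name table; purely illustrative data.
SHORT_NAMES = {"MYDOCU~1": "My Documents", "SECRET~1": "secret file.txt"}

RULE = "c:\\my documents\\secret file.txt"  # the path under protection

def expand_sfn(path: str) -> str:
    """Rewrite each 8.3 component to its long-name equivalent."""
    parts = path.split("\\")
    return "\\".join(SHORT_NAMES.get(p.upper(), p) for p in parts)

def denied_naive(path: str) -> bool:
    """Compare the raw string: misses any short-name spelling."""
    return path.lower() == RULE

def denied_expanded(path: str) -> bool:
    """Expand short names first, then compare."""
    return expand_sfn(path).lower() == RULE

attack = "C:\\MYDOCU~1\\SECRET~1"
print(denied_naive(attack))     # False - SFN path slips through
print(denied_expanded(attack))  # True  - caught after expansion
```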
After the third set of tests turned up a major issue with pathing, I went through the process of completely threat modeling that entire area of data entry using different scenarios, and realized that my kernel code simply wasn't capable of properly handling the conversion. And before you start taunting me and saying "pffft... just use the path conversion APIs", let me be clear... there is no such luxury as the Win32 GetLongPathName() function in the Windows kernel... you have to roll your own code WITHOUT causing recursive IO. And it wouldn't have dealt with the volume renaming anyways. Actually, there are very FEW luxuries in the kernel... which is why most of us who write kernel code look like we belong in a rubber room, with our hair frayed (if we have any left!) and the 4 o'clock twitch. That's an entirely DIFFERENT story :)
Anyways... the routines to do path matching made no sense, and the volume renaming code was clunky and prone to weird issues on foreign file systems. (One pre-release customer had a proprietary encrypted file system which I just couldn't properly get control of.) The threat model exposed the REAL issue, and I did the one thing my Board of Directors really didn't want me to do. I halted release of the software. I had to. This was a major architectural flaw, and the time to deal with it was now... before the release of the product. In the long run this would SAVE us money. And credibility. And trust. And they agreed.
So I spent a month redesigning the entire set of data entry routines that handled pathing. With thanks to Neil from Microsoft and Vladamir from Borland (thanks again guys), I found a way to map the files right down to the device volume, which is guaranteed to always be valid, and then rewrote the SFN->LFN conversion routines to properly address any sort of conversion issue. In the end, I fixed a potentially larger future problem that could actually have led to a real attack vector to bypass the system. Very small chance, but still one I knew about. And you simply CANNOT ship critical software with a potential vulnerability like this.
When reflecting on this whole experience, besides showing that I am human and prone to mistakes like the next guy, what I learned from this is to NEVER cut corners. I knew better. It should never have happened. Doing so ended up causing MORE delays in the product launch, which ultimately affects our bottom line. However, what I am PROUD of is that my Board supported my decision to halt the launch. They fully understood and respected my position that the long-term impact of releasing a product with a serious architectural flaw could very well expose our clients to unnecessary risk, which is unacceptable. And something I just won't do.
We also get the long-term benefit of fiscal responsibility in the software design and deployment characteristics of the product; it would cost us MORE money in the long run in engineering change requests, updates, education, etc. if we launched and then had to totally redesign it while staying backwards compatible with what was in the field. It only took me half a day to write an immediate patch/fix for the existing pre-release clients, since it was a small group of sites. Not something you can easily do when you have tens of thousands of installs.
Now, I am not saying that anyone and everyone should HALT an entire company when you find a serious flaw. And I know there is a huge difference between writing the next greatest RSS reader versus something like an intrusion prevention system used to safeguard critical business resources. It may not be practical to halt a release, especially when you have a tonne of installs already; at that point you need to get out there and immediately protect what's in the field. However, it does give me more insight into some of the decisions Microsoft made when halting development to re-educate their developers. And roll out XPSP2. And get out Longhorn.
Although you cannot see it right away, delays may actually be MORE RESPONSIBLE than releasing software at risk. Delays may actually SAVE YOU MORE MONEY in software re-engineering costs. We are not in an engineering discipline where everything can be guaranteed to be 100% safe. People always try to draw the analogy between software and how bridges are built. I don't think it's a fair analogy. Engineers have had CENTURIES to work on that discipline. We are not even 30 years into practical software design. However, that doesn't mean we shouldn't be RESPONSIBLE for our software. Secure software engineering as a discipline may still be in its infancy... but we shouldn't ignore it. Doing so puts everyone at risk.
The threat in our IPS software is now mitigated. The product is again on its release candidate cycle, and I can now look back and reflect on the dumb decisions I made and the impact they had on my company. Nothing too critical; I adapted fast enough to make the right decisions and get the company back on track. And I look to things I have learned from people I respect, like Gene Spafford, and realize that ultimately I made the right decision. Security is a property supported by design, operation, and monitoring... of correct code. When the code is incorrect, you can't really talk about security. When the code is faulty, it cannot be safe.
Fixing the architectural flaw was the right thing to do. Hopefully our clients will agree. If I had just threat modeled that area when the first set of bugs came in, I would have found it and saved everyone a lot of headaches.
Hopefully you learned something here. I sure did.
September 27, 2004
On the lighter side of virus writing...
With my last post causing a bit of stir, I think I will lighten the mood a bit with a good drawing from my buddy JD.
September 26, 2004
The minefield of hiring a hacker
If you haven't heard lately, Sven Jaschen, the author of many variants of the Netsky and Sasser worms, was hired by the German security company Securepoint to be a developer on their security software, including things like their corporate firewall suite.
In recent worms, hackers have been so bold as to include text asking for jobs. Recently I received a resume from a 'reformed' hacker who visits my blog regularly. Let me give you my take on the issue, and once and for all explain why I think it is a BAD idea to hire hackers.
First off, let's get the definition out of the way. In this context, being a 'hacker' is not the good connotation where you get around complex problems with interesting code. I, for example, am a hacker (good connotation). What I am not... is a CRIMINAL (bad connotation).
What's the difference? At the point where you breach someone else's resources without their permission, in my mind you are a criminal. When you leave the perimeter and enter into someone else's realm, which includes the network infrastructure (i.e., your ISP's Internet connection... remember, it's theirs... NOT yours), and you do something unethical and get caught, in my mind you are a criminal.
And in my view, criminals have no business being in the professional field of information security.
Yes, that is an EXTREMELY harsh statement. And it's meant to be. But it comes from experience. It comes from reality. And it comes from protecting the profession.
There are many hackers that I know and respect who are amazing coders. They have talents in looking at and deconstructing code in such a different way that I could only lust after their expertise. But when you hire a 'hacker', you don't just get his or her amazing talents. You also get their ethics. And ethics are NOT something you can simply turn on and off at whim.
Now, before you go off all half-cocked and start spewing forth comments about how Kevin Mitnick is a perfect example of a reformed hacker gone good, let me spare you the trouble. I like Kevin; I have only met him once, and he seemed like a nice guy. I think the educational ambassadorial work on social engineering that he has done since his release from prison has been noble. But I still wouldn't hire him. His curiosity got the best of him, and he got caught. And even though he has served his time and is now considered reformed, the real point is that he served his time for a CRIME. What he did was criminal. Plain matter of fact. And he admits it. And wishes to move on. And I applaud him for that. He just won't be getting hired by me any time soon.
You see, I subscribe to a code of ethics which does not permit me the luxury of blindly trusting that someone else's ethics have changed... I must make decisions from previous experience. I avoid professional association with those whose practices or reputation might diminish the profession. I might drink beer with them. Debate with them in the wee hours of the morning in hotel rooms at conferences. Listen to them to learn from their experiences and take constructive criticism on things I may not know, or do incorrectly. I will even work with them as part of security incidents. But I will NOT hire them onto my team. There are amazing people out there who DO have a higher code of ethics, so I don't need or want to waste my time HOPING someone has reformed. I have to trust the people on my team implicitly. I will not take that risk on behalf of my team, or my clients. So don't even bother asking. You will not be considered.
Running NMAP across an AIM Stream
Now this is some interesting python code.
I had an email today from Abe Usher, the author of a new IM bot armed with security tools.
Basically he has created a way through AIM to query the bot and get it to call upon security and network tools, including ping and nmap.
You can get some background information and see some screenshots by checking out his blog post on the subject.
What will they think of next?
September 24, 2004
Star Wars - Friday Fun
If you are a Star Wars fan like me, you probably have a hatred for that dumb ass Jar Jar Binks.
Thought you might enjoy this. :)
When David gets bored, he says he goes after those pesky ewoks. 'Me sah' not know how you can get bored of shooting Jar Jar...
Risk Analysis and Management Methodology For Information Systems
Javier Cao Avellaneda pointed me last night to an interesting procedure handbook called RISK ANALYSIS AND MANAGEMENT METHODOLOGY FOR INFORMATION SYSTEMS, which has some interesting reading on an approach to threat modeling. Code-named MAGERIT, it appears to be based on ISO 13335, the ITSEC criteria and ISO 17799.
Javier says it studies the risks that an information system supports, as well as the related environment. He defines risk as the possibility of damage or injury occurring in the system according to the existing threats. MAGERIT recommends the appropriate safeguard functions and mechanisms that should be put in place in order to know, prevent, impede, reduce or control the investigated risks.
I haven't had a chance yet to do a detailed read, but from a quick glance it looks pretty interesting. At a hefty 200+ pages, it's something you will need to put some time aside for. If you are into threat modeling, it might be worth your time to check it out.
September 23, 2004
SBS Users are NOT Second Class Citizens, and RSA Agrees!
You know how a month ago I made a comment about how, during evaluation of RSA SecurID for my organization, I realized that the costs were just too prohibitive, and I made the comment to RSA (and other vendors):
A note to the security vendors out there. Small businesses are not second class citizens! We have security needs just like the big boys. Why is it so hard to believe a small business of 5 or 10 people would want to implement strong security solutions? Think about that next time you do market research. You are missing a HUGE target demographic, and I bet if you looked... you have some easy wins that could increase your sales pipeline.
Susan picked up on that and wrote an open letter to RSA with her own views of this.
Then a couple of days ago I made the comment that I was pleased to see AOL partner with RSA for OTP two factor authentication. They showed they could bring the cost down for small business, and I was hoping Susan's letter might make a difference for us SBSers.
Well it did! RSA has responded to her, and she has posted that response on her blog.
I was wondering why I had so many hits from RSA in the last week. It was weird to see such a spike. I know I have a couple of regular RSA readers, but I had like 10x the normal hits from them. Now I know why.
Anyways, the result? RSA is going to be introducing a 10-seat licensing pack within the next 90 days! That's the power of blogging at work, people! Now, let's hope they take advantage of SBSers like Susan for expertise on rolling out SBS-specific wizards and configuration!
Kudos to RSA for listening. You just earned some points in my book.
September 22, 2004
Survivability of RHEL3
"... a full install of a Red Hat Enterprise Linux 3 box that was connected to the internet in November 2003, even without the firewall and without receiving updates, would still remain uncompromised (and still running) to this day."

Of course 80% of all stats are made up, and this is coming from Red Hat... but he brings up some interesting conclusions. I haven't confirmed his findings against reports on Bugtraq to see if RHEL3 has any other vulnerabilities to report... but this seems pretty much right if I recall.
When SANS did its last Survivability Report for Windows, the findings showed it would take only 20 minutes on average for a machine to be compromised remotely, less than the time it would take to download all the updates that protect against those flaws. ZDNet has an interesting article about that already. Of course, we are kinda comparing apples to oranges here, since we aren't pitting RHEL3 against SBS2003 (the closest comparison you could make), but it's interesting nonetheless.
So what do you think?
September 21, 2004
AOL now offering Two Factor Authentication
Now this is interesting. According to an article on VNUNET, AOL is now offering RSA SecurID tokens on a subscription basis to any customer who wants to add two factor authentication to their account.
Customers wanting to sign up for the service will have to pay $9.95 for the token, plus a monthly subscription of $1.95. That's pretty good, especially since the token itself costs about $75 retail.
I guess when you buy in bulk, you can get some killer deals. Only wish RSA would work with us SBSers. Maybe Susan's open letter to RSA will make a difference.
Hey... we can dream!
September 19, 2004
Open Source Security: Still a Myth?
On Valentine's Day of this year I posted an entry called Shattering the crystal and poking holes in the black box, in which I discussed open vs closed security. I followed that up in April with some more information on what other people were saying about Open Source vs. Closed Source Security. I had some interesting discussions in the comments on both posts; there was a lot of polarity around the subject.
Well, over the weekend it seems John posted a great article entitled Open Source Security: Still a Myth.
He has the same views that I have on the subject. And his article is a great read. He sums up point for point what I have been saying for years. In the end it doesn't matter if open source systems tend to be more secure than proprietary systems, because on the whole they aren't yet coming close to being "secure enough."
I highly suggest you check out his article!
Microsoft software implicated in air traffic shutdown
According to an article on ZDNet, a three-hour system shutdown that affected Southern California's airports was reportedly caused by a technician who failed to reboot an MS-based system.
Pointing to an article in the LA Times as its source, ZDNet said that a Microsoft-based replacement for an older Unix system needed to be reset every thirty days 'to prevent data overload', as a result of problems found when the system was first rolled out. However, a technician failed to perform the reset at the right time, and an internal clock within the system subsequently shut it down.
And this is a critical system used to manage communications for US airspace? WHAT???
But here is the kicker. The blame appears to be in the wrong place. This isn't a problem in a Microsoft product itself, but in an application written ON TOP of the Microsoft platform. And to top it off... people KNEW about this issue when they first DEPLOYED IT!!!
Listen, put the blame where it belongs. On the poor software that was developed, and the poor management of the IT resources to deploy something KNOWN to have this issue. Or at the very least, have better safeguards in the process of managing the system to GUARANTEE a person reboots it on day 29.
Geez. Next we are going to blame SCO for the downfall of the Linux desktop, or McDonald's for the downfall of the fruit industry. Get with it. Poorly developed software in other people's applications can't be blamed on Microsoft. (At least most of the time, anyways.)
September 16, 2004
The User Experience of Identity Management in On-Line Applications
Read an interesting article this morning on The User Experience of Identity Management in On-Line Applications: Relationships Between Ease-of-Use, Design Patterns, and Trust.
The article covers usability issues as they pertain to security mechanisms and their impact on the user experience, trust, and control. The conclusion of the article is that the connections and interdependencies between UI interaction design patterns, identity management features, and system ease-of-use, and their impact upon user trust, are "irrefutable". I get what the author is trying to say, but I don't think it's as easy as a blanket statement that users will trust an online application more if the system attempts to incorporate humanistic components into its security design techniques and interactions. Then again, looking at the success of so many phishing scams, one has to wonder. *sigh*
It would be interesting to know what Joel thinks about this in light of his position on UI development. I would bet it would be somewhere in the middle ground.
September 14, 2004
Using Graphs to Depict Access Control
Most programmers are familiar with the access-control list (ACL) as a data structure used for authorization. This morning I read an interesting article that describes a more robust structure called an access-control graph (ACG).
The author proposes that we use an ACG instead of an ACL for access control. He believes a graph does everything an ACL can do, offers additional security, and provides other useful features not available in an ACL design.
I'm not sure how I feel about this. On one hand, pictures are more powerful than words; the flat data structures depicted by traditional ACLs have a tendency to leave new infosec pros glassy-eyed, and are prone to error. On the other hand, I am not sure the ACG can be queried as quickly as the "truth table" data checks you can do with an ACL.
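To make the contrast concrete, here is a minimal Python sketch of the two approaches (the principals, groups and resource names are made up for illustration, and this is my reading of the idea, not the article's actual design): a flat ACL answers only direct grants, while a graph can answer "is there any path of delegation from this principal to this resource" via a simple breadth-first search.

```python
from collections import deque

# Flat ACL: resource -> set of principals granted directly.
acl = {"payroll.db": {"alice"}}

# Access-control graph: edges express membership/delegation, so a
# grant to a group node reaches everyone who can reach that node.
edges = {
    "bob": ["finance-team"],         # bob is a member of finance-team
    "finance-team": ["payroll.db"],  # the team is granted the resource
    "alice": ["payroll.db"],
}

def acl_allows(principal: str, resource: str) -> bool:
    """Direct grants only - group membership is invisible here."""
    return principal in acl.get(resource, set())

def graph_allows(principal: str, resource: str) -> bool:
    """Breadth-first search: access means a path exists from
    the principal node to the resource node."""
    seen, queue = {principal}, deque([principal])
    while queue:
        node = queue.popleft()
        if node == resource:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(acl_allows("bob", "payroll.db"))    # False - flat list misses it
print(graph_allows("bob", "payroll.db"))  # True  - path bob -> team -> db
print(graph_allows("eve", "payroll.db"))  # False
```

The trade-off shows up right in the code: the ACL check is a single set lookup, while the graph check is a traversal, which is the speed concern I raised above.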
What do you think?
September 13, 2004
SBS 2003 Deployment Halted
Well, unfortunately I have had to halt my deployment of SBS 2003 for a while. I really don't WANT to, but I seem to have pushed SBS to its limit with regard to my particular needs.
To be honest, the limitation isn't actually in SBS, but in ISA 2000. Let me give you some background so you can see what I have come up against.
As you may have previously read, I have a need for a SBS 2003 machine that is hosting Outlook Web Access (OWA) and Outlook Mobile Access (OMA) for external parties, clients and virtual employees around the Net. The idea is that I can create a virtual office in our DMZ without having to expose critical business resources not needed by these users to the outside. SBS 2003 looked like a perfect solution, and I went hunting.
To reduce the attack surface of the machine while ensuring strong audit trails, I require that ALL connections coming into these services (actually ALL services except incoming SMTP) be authenticated against Active Directory. My goal is to eliminate the potential compromise from unknown threats that may be exposed by vulnerable code or services along the code execution path between the OWA front end on IIS and the Exchange back end. It also reduces the risks of poorly configured or unknown services that may be running when they shouldn't be. Since the circle of trust for this group of users is quite small, I have a relative level of assurance that I can mitigate most risks by simply removing the ability to connect to the server anonymously and do bad things. By removing the ability for an adversary to even throw a connection request at the IIS box without authenticating, I get that assurance level.
Anyways, I have had the opportunity to discuss with Microsoft my needs, my concerns, and my deployment requirements. What I found out was that there is a design limitation in ISA 2000 that prevents this from working correctly. *sigh*
I am told that the ISA dev team is already aware of this and made big changes in ISA 2004 to address it. This enhances the security of remote access to OWA by preventing unauthenticated users from contacting the OWA server at all. Knowledge Base article 838704 discusses how this now works in ISA 2004.
So, it looks like I am out of luck until ISA 2004 is freely available to work properly with SBS 2003. The GREAT news is that, as Susan has reported from her findings at SMB Nation, ISA 2004 will be available FOR FREE with SBS 2003 SP1, and will include new wizards to support it. The only issue is that the roadmap puts the availability of SP1 at the beginning of next year.
So what do I do now? Well, knowing Microsoft's normal roadmap delays, I simply cannot wait until then for this project. Chances are that's a year away. (Go ahead and debate the roadmap all you like... I am STILL waiting for the W2K SP5 that was supposed to be delivered at the beginning of the year, which includes the new filter manager code.) As such, I am going to look at the impact of manually rolling ISA 2004 onto SBS 2003. This has the potential of breaking some security policies on SBS, so I will need some time to reflect on the impact. I notice all the SBS sites warn that running ISA 2004 on SBS is "unsupported", but no one says it can't be done.
Guess we will see what happens.
Book Review - Joel On Software
Well over the weekend I finished reading "Joel on Software: And on Diverse and Occasionally Related Matters That Will Prove of Interest to Software Developers, Designers, and Managers, and to Those Who, Whether by Good Fortune or Ill Luck, Work with Them in Some Capacity". Only Joel can make his title so gosh darn "out of the ordinary". Just like him!
Not surprisingly, the book was great. Of course, it was déjà vu all over again. In case you didn't know, the book is a collection of some of his greatest articles from joelonsoftware.com. What was interesting was reading it AWAY from the computer. Reading Joel while listening to the water break on the beach was an entirely different experience for me. I highly recommend it. :)
Anyways, if you read his articles religiously you will probably not find anything new here. The book's greatest asset is that you are not tethered to your computer to read it! And that made the difference for me. There were articles that I just forgot about over the years, and it was nice to get a refresher.
It also let me look critically at myself, the development process I lead, and 'everything technical' around the office. When I originally did the Joel Test a few years ago I scored a miserable 2. Redoing the test at the end of last week, I realized I am now firing on 10 out of 12. I am being extremely critical of myself; if I blink just right I can say we have 12 out of 12, but that's not being entirely honest with myself. And I will fix those last few issues in the coming months.
All in all, if you haven't been keeping up with reading Joel on Software, or you are new to him, I would HIGHLY recommend the book. It is an easy and light read which goes by quickly. If you are strapped for cash or don't like reading 'analog', then just go to his site. I would suggest starting with his archive index.
September 07, 2004
Understanding what's behind firstname.lastname@example.org
I find it interesting to watch security researchers blast Microsoft for not communicating with them when they report what they believe is a possible vulnerability. I have enjoyed a great relationship with Microsoft when communicating with them, but to be honest, I don't normally fire email to email@example.com; I just email team members directly.
Robert caught up with Stephen Toulouse on Channel 9 and got an Introduction to Microsoft's Security Response Center (the guys monitoring firstname.lastname@example.org), with some good info on tape about the process that goes on when you fire them an email.
If you are new to doing security research and want to understand how Microsoft deals with it... consider watching the video. It should be required viewing BEFORE you blast some venomous lack-of-disclosure rant on mailing lists like Bugtraq. We just ignore you when you do that anyways.
On a side note, why doesn't someone at Microsoft give the rent-a-cop campus security team a different email address and forward email@example.com to the Security Response Center? I bet there are a LOT of emails getting lost in the wrong place because of this mix-up.
firstname.lastname@example.org != email@example.com
The first goes to the dudes driving the white Broncos on campus. The second goes to the Security Response Center. Got it? Good.
Exploit Mitigation Techniques
I just went through an interesting slide deck from Theo de Raadt's recent presentation on Exploit Mitigation Techniques in OpenBSD. It was very kewl.
I really liked his approach here. I wonder if anything will come of it.
September 05, 2004
OMG - Thanks for the bandwidth bump Shaw!
So recently I upgraded my Digital Cable Internet Service to the SOHO business tier so I could take advantage of a few things like static IPs, more monthly bandwidth, etc.
What I didn't realize was that I was also getting an increase in speed. How does downloading 547 megs in less than 15 minutes grab ya?
In reality, I am guessing this was probably cached somewhere local on Shaw's network. But nonetheless... that speed ROCKS!
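For the curious, a little back-of-the-envelope math (assuming the full 547 MB took a flat 15 minutes; the actual download was a bit faster than that) puts the throughput at just over 5 Mbit/s:

```python
# Rough throughput for the download described above.
# Assumptions: 547 MB total, 15 minutes elapsed, 1 MB = 2**20 bytes.
size_bytes = 547 * 2**20
seconds = 15 * 60

# Convert bytes over the interval into megabits per second.
mbits_per_sec = size_bytes * 8 / seconds / 1_000_000
print(f"{mbits_per_sec:.1f} Mbit/s")  # prints "5.1 Mbit/s"
```

Not bad at all for a residential cable plant in 2004.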
Now... time to be sucked into the productivity virus vortex that is the new Medal of Honor. Thanks for the link Alan.
September 04, 2004
Comment Spam is going to be the death of me yet
Ok, so after another flood of comment spam I thought I would get off my a$$ and fix the problem. I've wanted for some time to find a way to close comments older than 14 days, but the only solutions out there for MT require the data to live in a SQL database. That makes it difficult when you are using a Berkeley DB. *sigh*
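The logic I was after is dead simple; here is a minimal sketch of the "close comments after 14 days" rule in Python (hypothetical — MT's tools are actually Perl, and the in-memory entry list below is a made-up stand-in for the Berkeley DB store):

```python
from datetime import datetime, timedelta

# Hypothetical stand-in for the blog's entry store; the real Movable Type
# data lives in a Berkeley DB with no SQL interface.
entries = [
    {"title": "Old post", "posted": datetime(2004, 8, 1), "allow_comments": True},
    {"title": "New post", "posted": datetime(2004, 9, 1), "allow_comments": True},
]

def close_stale_comments(entries, now, max_age_days=14):
    """Turn off commenting on any entry older than max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    for entry in entries:
        if entry["posted"] < cutoff:
            entry["allow_comments"] = False
    return entries

close_stale_comments(entries, now=datetime(2004, 9, 4))
# "Old post" is now closed to comments; "New post" still accepts them.
```

Ten lines of logic. The hard part is getting at the data when it isn't sitting in SQL.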
So tonight with the help of Alan I thought I would finally get it done. Not such a good idea.
In the process of killing the spammers... I ended up killing all my comments from the past 2 years. Not good. To top it off, mt-db2sql.cgi segfaulted, so we couldn't get the data ported anyways.
I will have to beg and plead with Alan for yesterday's backup of my db. Hopefully he'll be in a good mood when I ask.
September 02, 2004
Microsoft releases interesting slidedeck on SBS 2003
Microsoft released an interesting slide deck today showing how Windows Small Business Server 2003 is an integrated, affordable network solution for small business. If you are unsure how SBS fits in the mix of Microsoft platforms, you should check it out.
I also found an interesting document called "Networking Basics for Small Businesses" which provides an introductory view of using SBS in your organization. They use a lot of customer profiles to get the point across for different scenarios. Nice touch.
Looks like Microsoft is on a full court press to get more SBS 2003 info out to the masses. If you have interest in this, you should take the time and check out some of the other information they have on it here.
Book Review - Microsoft Windows Small Business Server 2003
So this week I finished reading the reference book "Microsoft Windows Small Business Server 2003", written by Charlie Russel, Sharon Crawford and Jason Gerend. I originally thought this was going to be a dry read, but I was impressed with how useful it was for someone like me. Not knowing SBS 2003, it was refreshing to be able to step through each chapter as I was setting up a server for testing.
What I liked even more is that the book explains, in an informative yet simple manner, the things people screw up when working with SBS 2003. I think I got more out of reading the sidebar "Caution" and "Security" alerts than the actual pages.
Would I recommend this book? It would depend on the audience. If you already know SBS 2003, I think you will be BORED. However, if you are new to the game, it's a great way to start up the learning curve. I think a better name for this book would have been something like "Tips, Tricks and Traps on Setting Up SBS 2003 for the First Time". Ok... lame title... but you get the idea.
I think it will make a great reference book as I play with SBS 2003 some more. I definitely got my money's worth. Of course you have to weigh that accordingly; I find gaining knowledge to always be a worthwhile investment.
The book I started reading this week is called "Joel on Software: And on Diverse and Occasionally Related Matters That Will Prove of Interest to Software Developers, Designers, and Managers, and to Those Who, Whether by Good Fortune or Ill Luck, Work with Them in Some Capacity". (Yes what an amazingly long title. I think that's his point!) Very entertaining. Weird reading older articles that I explored online years ago. I'll post a review when I am done.
September 01, 2004
Happy 7th Birthday nmap!
HAPPY BIRTHDAY NMAP!!!
Congrats to Fyodor for hitting another milestone in nmap's life cycle. Great to hear 3.70 got officially released on its birthday! And I am glad I could be even a tiny help in making the latest version work with XP SP2. May the source be with you!
You can learn more about the new release by reading Fyodor's announcement here.