November 30, 2005

Windows Live Safety Center

Yesterday on the Microsoft Security Response Center blog, Stephen Toulouse announced in passing that the Windows Live Safety Center could now clean up the issues described in Microsoft Security Advisory 911302.

Now the link in the blog post is busted, but I was able to ferret around and find what he was talking about last night.

I'm not a fan of Microsoft's Live site, as so much stuff was borked when I tried it. Besides the fact that it did pretty much nothing in Firefox, I had my fill of customized "portals" in the late '90s with Yahoo. But then I came across Stephen's post...

It seems that Microsoft has moved some of its latest security features to the Live system AS A SERVICE. I originally thought this might be the OneCare Live beta, but after digging a bit deeper it is apparent this is a different project. The Windows Live Safety Center is a free scanning and PC health service to help delete viruses and other threats, plus a support community to help keep you informed of the latest security issues. That is unlike Windows OneCare, which is an automatically self-updating PC health service that runs quietly in the background of your PC. Where OneCare helps give you persistent protection against viruses, hackers, and other threats, and helps keep your PC tuned up and your important documents backed up, Safety Center is useful for checking on and fixing issues that may need immediate remediation. I can see this service being critical as new exploits bombard Microsoft's support center during an outbreak.

So why is this so interesting? Well, now when your grandma calls you and tells you her computer is acting funny, you can tell her to go to http://safety.live.com. It will install a small ActiveX component on the fly, and then:

  • Scan the computer for viruses using the latest signatures
  • Complete a disk cleaning scan
  • Complete a disk fragmentation scan
  • Do an open port scan
  • Output Computer Information

All the stuff you would normally have to try to get her to do over the phone or via Remote Assistance anyways. What I like about this process is that when I tested it last night, besides the ActiveX control, I didn't have to install or configure ANYTHING. No antivirus. No signature updates. It just worked. When I first hit the site, it was smart enough to know I wasn't running with Administrator privileges and gave me a very easy step-by-step guide on how to switch users and rerun the service.

Software as a service isn't something new. But are applications like Windows Live Safety Center a glimpse of what's to come?

In related news, Windows OneCare is now available as a consumer beta. You can get some more information on this and other related betas over at Windows Live Ideas.

And finally, for the RSS fans out there: the Windows Live Safety Center team has a blog feed over on MSN Spaces, although it's not very active yet. And guess what? The Windows OneCare team has a blog feed too!

Posted by SilverStr at 09:23 AM | Comments (0) | TrackBack

November 29, 2005

Refactoring .NET Code

Today I fell in love with a new dev tool. I'm a strong believer in the ROI of developers' time, and in Joel's position that devs need to have the best tools money can buy.

I also run a small company, and can't always easily make decisions about ROI for tools. So many exist, but so few can show immediate ROI. Today a friend of mine pointed me to Refactor! from DevExpress.

OMG. Just watch this 3-minute overview. Enough said.

I'm not a VB guy, so I can't use the free version DevExpress has out there in partnership with Microsoft. But they have Refactor! Pro, which supports C#. I told myself I wouldn't buy any more dev tools until our next product got released. I think I might have to rethink that decision. Damn you Michael!

Check it out for yourself. If you do any sort of code refactoring (and you should be if you are working in code), you owe it to yourself to invest three minutes in learning how this tool can save you time.

Posted by SilverStr at 11:48 AM | Comments (5) | TrackBack

November 28, 2005

E.T. phreak home

Man, those dorks at the SETI project. You gotta wonder if they even did a threat model against the very adversaries they are trying to find. They didn't even consider the fact that the Earth could be 0wned if E.T. decides to send malicious data to exploit the SETI clients running on critical infrastructure around the world over the Internet. Or at least, that's what Richard Carrigan, a particle physicist at the US Fermi National Accelerator Laboratory in Illinois, thinks.

Today I saw this 'newsworthy' (*snicker*, must be a slow news day) drivel from a few different sources. First it hit Infosec News, where a science correspondent discussed a report written by Dr. Carrigan entitled "Do potential Seti signals need to be decontaminated?". Dr. Carrigan wants the SETI scientists to build safety features into their network to act as a quarantine so any potentially damaging signals can be trapped before they infect the Internet.

What was more interesting to me though was a blog response by David Bianco on the matter. I think he sums it up eloquently when he states that:

The closest star is about 4.5 light years away from Earth. Assuming that we broadcast complete technical details of the x86 architecture and an entire copy of the Windows OS, along with a comprehensive set of security bulletins and an SDK, the necessary roundtrip time for data travelling at the speed of light would mean that by the time the "exploit" could arrive here, we'd be about 9 years further on. Let's see, 9 years ago, we'd all have been running NT 4 and Windows 95. Good luck trying a Win95 overflow on my XP system! The offsets are wrong now, and new security technologies exist now that weren't dreamed of then (like the non-executable stack). What will we have 9 years from now? I don't know (and neither do the aliens), but I do know the aliens don't stand a chance.

Security is about risk mitigation, not risk avoidance. Worrying about E.T. would be one of the last risks on my mind; I'd be more worried about the script kiddie that will use a vulnerability in one of those SETI clients to exploit the next nuclear power facility. Don't laugh... I've seen SETI clients in the most secure of places where they shouldn't be.

So stop the presses!! E.T. may be phreaking you soon!

Posted by SilverStr at 03:04 PM | Comments (3) | TrackBack

November 24, 2005

Any MSDE Gurus Out There?

It's almost 2 in the morning, and I am at my wits' end. I have been trying to find the answer to this and it's driving me batty.

You see, I have a 4 MB installer that does a marvelous job of getting an application I wrote onto a host machine. With that said, it's kind of a useless installer, because it uses MSDE and requires a specialized named instance to work. I have a prereq that requires MSDE to be installed, which isn't a problem since most machines are SBS boxes with it already there. On XP machines, they can easily download and install it.

Back to my problem though. To add a named instance to MSDE, the documentation states you must rerun the MSDE setup and pass it a few parameters. Herein lies the first problem. My installer has NO IDEA where the setup executable for MSDE is, or if it's even on the system anymore. And chances are, neither will the user. So the solution is to distribute it so we can take care of it for the user. I found out you can actually strip the 66 MB MSDE installer down to a 1.8 MB MSI and a 25 MB cab file (SqlRun01.msi and SqlRun.cab). Even with compression though, it's adding about 24 MB to my installer. What happened to my nice small app that's easy to download and takes little bandwidth? *sigh*

Now that I am distributing the files, my cmd line looks something like this:

SqlRun01.msi INSTANCENAME=MyInstanceName SAPWD="SomePass" REBOOT=ReallySuppress DISABLENETWORKPROTOCOLS=1 DISABLEAGENTSTARTUP=1 DISABLETHROTTLE=1

And it works. But I HATE IT. I now have to distribute a huge file to support this. There has to be another way to create a named instance in MSDE. Does anyone know of a way to do this programmatically without having to ship the CAB file? If you do, please email me or leave me a comment. I think it's rather unacceptable to have to ship a setup installer for someone else's product that is already installed just to configure it. I must be missing something simple here. Guru knowledge very much welcomed.
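For what it's worth, here is roughly how I shell out to that MSI from managed code today. This is just a minimal sketch of the approach above (it assumes SqlRun01.msi and SqlRun.cab sit beside my installer's executable, and the instance name and sa password are placeholders), so it doesn't solve the real problem of having to ship the CAB at all:

    using System;
    using System.Diagnostics;
    using System.IO;

    class MsdeNamedInstanceSetup
    {
        // Minimal sketch: kicks off the stripped-down MSDE MSI with a named
        // instance. Assumes SqlRun01.msi and SqlRun.cab were extracted next to
        // this executable; the instance name and sa password are placeholders.
        static void Main()
        {
            string msi = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "SqlRun01.msi");

            ProcessStartInfo psi = new ProcessStartInfo();
            psi.FileName = "msiexec.exe";
            psi.Arguments = "/i \"" + msi + "\""
                + " INSTANCENAME=MyInstanceName"
                + " SAPWD=\"SomePass\""
                + " REBOOT=ReallySuppress"
                + " DISABLENETWORKPROTOCOLS=1"
                + " DISABLEAGENTSTARTUP=1"
                + " DISABLETHROTTLE=1"
                + " /qn";                        // run the install silently
            psi.UseShellExecute = false;

            Process setup = Process.Start(psi);
            setup.WaitForExit();                 // block until MSDE setup finishes
            Console.WriteLine("msiexec exit code: " + setup.ExitCode);
        }
    }

It works, but it still drags the 25 MB cab along for the ride, which is exactly what I am trying to avoid.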

Posted by SilverStr at 01:53 AM | Comments (5) | TrackBack

November 22, 2005

The Cost of Fixing Bugs and How Irresponsible Disclosure Doesn't Help the Matter

First off, a disclaimer. I am not a Microsoft employee, nor do I ever expect to be one. The views in this post are speculative at best, and downright wrong at worst. I have no real idea of how Microsoft makes its decisions when it comes to its software and security development life cycles, and I base my assumptions on what I know and what I have seen in the industry. And with that disclaimer out there, let me start a discussion on the real thing that drives how and when bugs get fixed, and how security considerations come into play.

In case your head is in the sand and you didn't realize it, software companies are BUSINESSES. That's right. They are in the business to make MONEY. Don't get shocked by this. Don't dwell on it. Accept it as a fact of life. Hopefully the software companies you work with day to day try to build quality products that solve real pain points for you. If they didn't, you probably wouldn't care much, and you would have very little vested in their success, as they aren't doing anything to help you (at this time anyways).

I could dig deep into the economics of running a software business, and how software is (and always will be) shipped with bugs, but I don't have to. Eric Sink wrote an excellent article, My Life as a Code Economist. In the article Eric brings up an interesting point:

The six billion people of the world can be divided into two groups:
     1. People who know why every good software company ships products with known bugs.
     2. People who don't.
Those of us in group 1 tend to forget what life was like before our youthful optimism was spoiled by reality. Sometimes we encounter a person in group 2, perhaps a new hire on the team or even a customer. They are shocked that any software company would ever ship a product before every last bug is fixed.

I will let you read his excellent blog to learn more about how he came to that conclusion, but would like to pull an important aspect from his post on the matter. There are four questions that a developer needs to ask themselves about every bug they are faced with:

  1. When this bug happens, how bad is the impact? (Severity)
  2. How often does this bug happen? (Frequency)
  3. How much effort would be required to fix this bug? (Cost)
  4. What is the risk of fixing this bug? (Risk)

Questions One and Two are about the importance of fixing a bug. Questions Three and Four are about the tradeoffs involved in fixing it. And you need to consider them all when deciding what the right thing to do is for the customers using that product.
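To make the tradeoff concrete, here is how I picture those four questions in code. This is purely my own toy illustration, not a formula from Eric or from Microsoft:

    using System;

    // My own toy illustration of Eric Sink's four questions -- not a formula
    // from Eric or from Microsoft. Importance comes from severity and frequency;
    // the tradeoff comes from cost and risk.
    class BugTriage
    {
        public int Severity;    // 1 (cosmetic annoyance) .. 10 (data loss, security hole)
        public int Frequency;   // 1 (rare corner case)   .. 10 (every user, every day)
        public int Cost;        // 1 (one-line fix)       .. 10 (redesign required)
        public int Risk;        // 1 (isolated change)    .. 10 (touches everything)

        public bool ShouldFixBeforeShipping()
        {
            int importance = Severity * Frequency;   // questions one and two
            int tradeoff   = Cost * Risk;            // questions three and four
            return importance > tradeoff;            // value of the fix exceeds its cost
        }

        static void Main()
        {
            BugTriage crashOnStartup = new BugTriage();
            crashOnStartup.Severity = 9;
            crashOnStartup.Frequency = 8;
            crashOnStartup.Cost = 3;
            crashOnStartup.Risk = 2;
            Console.WriteLine("Fix before shipping? " + crashOnStartup.ShouldFixBeforeShipping());
        }
    }

Crude, yes, but it gets the point across: two of the questions argue for fixing the bug, and the other two argue for leaving it alone.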

So what does this have to do with security and Microsoft? Well, Questions One and Two are covered on Eric's site as well as in articles such as Hard-assed Bug Fixin' by Joel Spolsky (Joel on Software). One of my favorite quotes from Joel came in his bug fixing article, when he pointed out that:

Fixing bugs is only important when the value of having the bug fixed exceeds the cost of the fixing it.

Remember that quote. We will be coming back to it.

According to Joel, in the early nineties there was a financial reorganization at Microsoft under which each product unit was charged for the full cost of all tech support calls. So the product units started insisting that PSS (Microsoft's tech support) provide lists of Top Ten Bugs regularly. When the development team concentrated on those, product support costs plummeted. So bugs started to be prioritized by the cost impact to the product unit.

But something changed in the last few years. When it came to security, it wasn't about the COST of fixing the bug as much as it was the criticality and severity of the bug. It seems Microsoft reorganized its Security Bulletin Severity Rating System at the Microsoft Security Response Center to align with how and why bugs got prioritized within the product groups. The more critical a bug was, the higher the priority was to fix it in the grand scheme of things.

Now let's look at the Microsoft Internet Explorer "window()" Arbitrary Code Execution Vulnerability that is running around on the web. In my last post, entitled "Again with the Irresponsible Disclosure - 0-day IE exploit in the wild", a lot of the comments from my readers were about the fact that Microsoft had 6 months to fix the original bug, and that it was ok that some security researchers acted irresponsibly and posted a NEW attack vector using this original bug as a base without informing Microsoft.

Six months is a long time. No doubt about it. It would seem there is no obvious excuse for WHY Microsoft didn't have a fix for the original denial of service bug by now. Unless we consider what goes into fixing a security bug.

Most people who consider themselves technically savvy (and may even be developers) can't fathom how hard it would be to simply make a change in the code base and fix it. Typically, that flawed arrogance comes from never having actually worked on a code base with millions of lines of code, or having to test against as many deployment scenarios as Microsoft needs to.

Back in June, eWeek ran an excellent interview with Microsoft's MSRC program manager, Stephen Toulouse (Personal blog, Company Blog). There was a piece in that article which I think has some bearing on this discussion.

In some cases, particularly when the Internet Explorer browser is involved, the testing process "becomes a significant undertaking," Toulouse said. "It's not easy to test an IE update. There are six or seven supported versions and then we're dealing with all the different languages. Our commitment is to protect all customers in all languages on all supported products at the same time, so it becomes a huge undertaking."
"This is exactly why it can take a long time to ship an IE patch. We're dealing with about 440 different updates that have to be tested. We have to test thoroughly to make sure it doesn't introduce a new problem. We have to make sure it doesn't break the Internet. We have to make sure online banking sites work and third-party applications aren't affected," he added.
Internet Explorer updates are also cumulative, meaning that they address several newly discovered vulnerabilities and all previously released patches, causing even more delays when the new fixes are bundled into older updates.
"This is why it takes so long, but that's not to say that if there's an exploit, we won't accelerate testing and get it out there as fast as we can. But if we find problems in the testing phase, it could trigger a restart and cause even more delays," Toulouse said.

Think about that for a second. A single code change has to go through all that. It could take weeks... even months, depending on the scope and impact of the change. And to top it off, Microsoft is a business that has to weigh everything accordingly. Remember the four questions Eric Sink brought up? Severity and frequency help determine how immediate the fix needs to be, while the cost of producing the fix, weighed against the risk of breaking something else, means a lot more investment must go into the fix than you would originally think. If Microsoft ranked the DoS of the original bug as Low or Moderate, it may not see the light of day right away as other bugs will take precedence.

Remember Joel's quote about how fixing bugs is only important when the value of having the bug fixed exceeds the cost of fixing it? Can you imagine how much "cost" is associated with this bug now that its severity has probably been elevated to Critical at Microsoft? We can probably expect a fix to be coming soon. I bet it has jumped the queue and been reprioritized as something major. Is this evidence that the security researchers were the catalyst by releasing the PoC exploit, proving that responsible disclosure isn't required? I don't believe so, and let me tell you why.

The potential financial impact of this action goes beyond the cost to Microsoft of fixing the bug. Or the cost to Microsoft of testing it. Or the cost to Microsoft of the loss of trust by customers. The real financial impact could end up falling on us, the end users of the product. One reader stated in my last post that "Now it's all out in the open, at least we know how to counter the threat, and perhaps now Microsoft can finally give it the proper due care and attention it deserves." There may be a flaw in that thinking. The impact to real-world businesses out there could end up being more severe on average. If a malicious payload using this new attack vector comes across the Internet, it can have a huge financial impact on businesses as they have to pay to repair the damage. Labour costs, lost productivity and lost credibility could do far more harm than any good that came from the disclosure. I don't see that as a benefit. Do you?

What about the fact that maybe now "Microsoft can finally give [the bug] the proper due care and attention it deserves"? Well, if the researchers had reported the NEW attack vector for this bug to secure@microsoft.com, I can pretty much guarantee (using their history over the last year as a base) that Microsoft would have taken swift action to first reprioritize the bug, and second to take action to mitigate it. But Microsoft didn't get the chance to do it. As a DoS bug, maybe Microsoft felt the value of having the bug fixed did not yet exceed the cost of fixing it. Maybe there is a plethora of more critical bugs that need to be fixed first. I don't know. I don't work there. But I do know this... the potential financial impact NOW, thanks to the UK security group, is much higher than it was a few days ago. Real risks against our environments are exposed by their actions. As a whole, we didn't gain from this. Microsoft certainly didn't. But the UK security group did.

Oh wait... wasn't that their intent in the first place?

Since they gain from it... should they also be liable for any damage it causes? Or should Microsoft? Uh oh... now we are getting into liability and the software industry... an entirely different can of worms. Something beyond the scope of this discussion.

So what exactly am I saying? That it's ok that Microsoft didn't fix the original bug in a timely manner? Nope. Far from it. My point is that fixing bugs has a real cost to any software business. And to every business that uses software. We need to understand that it doesn't happen in a vacuum. Next time you want to yell at Microsoft (or any software vendor for that matter) for not fixing bugs, ask yourself what the REAL cost is to you if they don't ship you a patch right away. Or the cost to you if they ship you a shoddy one. Consider the current risks to your business because of the flaw, and the financial impact the flaw actually has on you. Now consider irresponsible disclosure and security researchers releasing exploit code that can assist in the creation of malware before the vendor has a chance to make a patch. How does that HELP you? If anything it may accelerate the release of a patch. As we saw earlier in the eWeek interview though, that pressure might actually lead to an ineffective patch. And that is no good.

Irresponsible disclosure just doesn't help the situation. We should not be so willing to accept it as being an acceptable practice.

Posted by SilverStr at 09:16 PM | Comments (18) | TrackBack

November 21, 2005

Again with the Irresponsible Disclosure - 0-day IE exploit in the wild

It hasn't even been 3 months since my last tangent on Why Responsible Disclosure should trump 'Glory Hounding', and we see it yet again.

A UK security research group calling themselves "Computer Terrorism" has released a proof of concept exploit against patched versions of Internet Explorer. The vulnerability has been known for a few months now, but it has so far been treated as a denial of service (DoS) vulnerability. The author of this PoC figured out a way to use this older vulnerability to execute code.

The PoC simply launches calc.exe as the user of the browser. However, we took this further and confirmed you can pretty much do anything you want with it. We were able to download a nasty payload and nuke a few VMware sessions in the process.

The current mitigation strategy is to turn off JavaScript, or use an alternative browser like Firefox. However, Firefox locked up on me twice as I was testing this, requiring me to kill the process and restart it.

Susan posted that "Yes the sky is falling". She points out that if you run with least privilege, this won't do much. I disagree. We were able to delete the entire contents of the user's My Documents folder by simply clicking a link in OWA. With a few phishing-type techniques, this could get ugly fast.

But that's not the point of this post. What I am vexed about is that this PoC is out there in the first place. Where was responsible disclosure in this matter? People are scrambling over this one, and no clear message is coming out of Microsoft. You can't blame them; they are in reaction mode in the security center right now looking into this.

Side bar: As I am writing this, I just got an email with a link from someone showing a PoC that opens a remote shell. *sigh*

As I said three months ago, lately it seems far too many people are in a rush to get their name out there instead of following responsible disclosure rules as they relate to reporting vulnerabilities in software. And now this is being fueled as every security incident website carries the story, each with its own little piece of info to boot.

STOP IT. Act responsibly. Talk to the vendor. Give them the PoC. Let them release a patch within a respectable time frame before you go public. Let the users and administrators have a chance to fix this before it does damage in the real world. You are not helping the industry. You are hindering it. *sigh*

Too late for this one. Time to go back to pine, gopher and lynx, I guess.

Posted by SilverStr at 12:04 PM | Comments (13) | TrackBack

November 17, 2005

New Microsoft Threat Modeling Slidedeck

As promised, Dan has sent me his slidedeck that he presented at the Westcoast Security Forum, which you can now download from my blog here.

I'd like to point out a few interesting things I learned from it. First off... I LOVE the fact that he uses the Common Criteria for Information Technology Security Evaluation to show the security concepts and relationships behind "why" threat modeling matters. I thought it was a smart idea to show that this isn't a "Microsoft" thing.

I also was surprised to learn that in the updated threat modeling process at Microsoft, the step to build threat trees is now considered optional. The main reason seems to be that you REALLY need a security guru who understands the system and the threats to it to define the tree in any sort of useful depth. (To those of my students that I have taught secure coding to, this is what I call attack trees.) It is interesting that Microsoft found in practice that this should be an optional step. I can't say I disagree here. If I had to sacrifice anything during a threat model for the sake of time, it would be the attack tree that wouldn't get done. So I guess I shouldn't be all that surprised.

I also liked the changes in the data flow diagrams (DFD). Now these aren't actually changes, as they are defined in the Threat Modeling book and Writing Secure Code (Second Edition). What is different is the clarity in showing privilege boundaries and in defining that going two levels deep in a DFD is generally far enough. I also really appreciated how Dan showed implementation examples of the different components in a DFD, and more importantly... common DFD "bugs" and how to fix them. Actually... I think that was critical to his presentation, as I have found myself having to correct those same exact things when reviewing my own threat models.

Then Dan did something that actually impressed me. As information security professionals, one of the common truths we understand is that of the three pillars of information security that make up the CIA triad. Without them, you cannot build a secure system. For those that don't know, the CIA triad is composed of the principles of:

  1. Confidentiality - the ability to hide information from those people unauthorised to view it.
  2. Integrity - the ability to ensure that data is an accurate and unchanged representation of the original secure information.
  3. Availability - the ability to ensure that the information concerned is readily accessible to authorised viewers at all times.

What impressed me was the fact that Dan showed how the STRIDE categories fit against the CIA triad. Very few developers get that. Very few even consider their software from an infosec point of view, which is probably why I have a job. :) This, to me, shows a maturing in how Microsoft is viewing threat modeling and secure software development as a whole. When I hear people saying things like spoofing and tampering are anti-I in the CIA triad, I blush. They get it! They really do.
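I don't have Dan's exact mapping in front of me, but the commonly cited pairing looks something like this. A sketch of my own, with authentication, non-repudiation and authorization filling in where the strict C-I-A three don't cover a STRIDE category:

    using System;
    using System.Collections;

    class StrideToCia
    {
        // My own sketch of the commonly cited STRIDE-to-security-property mapping;
        // it is not lifted from Dan's slidedeck. Half of STRIDE lands squarely on
        // the CIA triad, the other half on its supporting properties.
        static void Main()
        {
            Hashtable map = new Hashtable();
            map["Spoofing"]               = "Authentication";
            map["Tampering"]              = "Integrity";
            map["Repudiation"]            = "Non-repudiation";
            map["Information Disclosure"] = "Confidentiality";
            map["Denial of Service"]      = "Availability";
            map["Elevation of Privilege"] = "Authorization";

            foreach (DictionaryEntry threat in map)
                Console.WriteLine(threat.Key + " attacks " + threat.Value);
        }
    }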

I also appreciated the fact that Dan presented a very simple chart showing Threat Types by Asset Type. One of the things I have found over time is that after you do a few DFDs, you notice common patterns and solutions to the same problems. His chart broke that down quickly, and in a simple manner. I might actually have to steal that one for my presentations. Hope that's ok, Dan. :)

Anyways, there is a lot for you to see and read. Feel free to download his slidedeck and check it out.

Posted by SilverStr at 08:32 AM | Comments (1) | TrackBack

All you ever wanted to know about Port Knocking

Recently I have received a few emails from people who wanted a copy of my Cerberus program, which allows you to fire covert ICMP packets to execute code on remote hosts that you control. I have used this tool for years to secretly open firewall ports, launch nmap and nessus scans, and even to email certain information I need while in the field.

Cerberus is not a publicly available tool, and hasn't been updated for Unix in years. I keep thinking about porting it to Windows, but just haven't had the time. Last year I started talking about the tool, since these days there are tonnes of port knockers that can do similar things. I even released a slidedeck about it called "Introduction to Cerberus: Port knocking with covert packets to secretly open your firewall", which I have presented at a few security conferences and user groups.

Anyways, as I started asking people how they found out about Cerberus, I was told that information about it was up on portknocking.org. Huh? Never heard of it. So I checked it out this morning. Wow. If you want lots of information on all the different types of port knockers out there, you owe it to yourself to check this out. You may be pleasantly surprised at how much in-depth information on the topic is indexed there. And sure enough... there are links to my slidedeck and information about Cerberus.

Many thanks to Martin Krzywinski for running such a site, and indexing Cerberus up there. The guy lives and runs this in my backyard and I didn't even know it. Great job.

Posted by SilverStr at 08:18 AM | Comments (0) | TrackBack

November 15, 2005

Secure Software Programming: DREAD is Dead.

So yesterday at the Westcoast Security Forum I sat in on Dan Sellers' latest threat modeling presentation from Microsoft. It has been interesting seeing the evolution of the process over time:

  • In Michael Howard's first edition of Writing Secure Code (semi-review back in 2002 here), DREAD analysis wasn't considered when threat modeling was introduced
  • In Michael Howard's second edition of Writing Secure Code, DREAD analysis was the de facto standard method of performing the analysis. I wasn't a fan of this, as I prefer the standard infosec risk formula (worked through in a quick sketch after this list):

    risk = Probability (chance) * Damage Potential (damage)

  • In Frank Swiderski's Threat Modeling book (my review here) Microsoft went one step further and got deeper into DREAD. I started moving towards DREAD, kicking and screaming all the way.
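Here is that risk formula worked through on a couple of made-up threats, as promised in the list above. The probability and damage numbers are purely illustrative:

    using System;

    class RiskSketch
    {
        // The standard infosec risk formula from the list above, worked through
        // on two made-up threats. The numbers are purely illustrative.
        static double Risk(double probability, double damagePotential)
        {
            return probability * damagePotential;   // risk = chance * damage
        }

        static void Main()
        {
            // Anonymous remote code execution: easy to hit, catastrophic damage.
            Console.WriteLine("Remote code exec: " + Risk(0.9, 10.0));

            // Info leak that already requires local admin: unlikely, modest damage.
            Console.WriteLine("Admin-only leak:  " + Risk(0.2, 3.0));
        }
    }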

Now... DREAD is dead, according to Dan. As I expected, DREAD was too subjective to be useful at Microsoft. Security-minded individuals would rank everything extremely high, making most threats seem to be a 10. Most developers not focused on security would give threats low ratings, scoring them a 0 or 1. Such polarity didn't make much sense, and they decided to drop DREAD.

So what are they using instead? They are using the Microsoft Security Response Center Security Bulletin Severity Rating System. Instead of having a rating system between 0 and 10 where most stuff is ranked as either a 1 or a 10 anyways, it is now broken down into one of four categories:

  1. Critical: A vulnerability whose exploitation could allow the propagation of an Internet worm without user action.
  2. Important: A vulnerability whose exploitation could result in compromise of the confidentiality, integrity, or availability of users' data, or of the integrity or availability of processing resources.
  3. Moderate: Exploitability is mitigated to a significant degree by factors such as default configuration, auditing, or difficulty of exploitation.
  4. Low: A vulnerability whose exploitation is extremely difficult, or whose impact is minimal.

This seems logical if you consider the progression of Microsoft as it relates to security updates. Prioritizing threats in this manner gets to the heart of what HAS to get done first. It matches well with the Security Response Center. And overall, it's much less subjective. Tie this to the fact that you can match up STRIDE analysis against this rating system, and you find this may indeed work better than DREAD.

DREAD is dead.

P.S. In the next day or so I hope to be able to post a few more items of interest that I learned from Dan, including a copy of his slidedeck. There are some changes to the DFD process now to consider, and I would like to ensure you guys get your hands on a copy of the slidedeck when he returns to Microsoft. I'll keep you posted.

Posted by SilverStr at 09:19 AM | Comments (2) | TrackBack

November 12, 2005

Blog Maintenance for the next hour or so

My blog will be up and down for the next couple of hours while I upgrade to the latest version of Movable Type and clean up some stuff. With over 90,000 trackback spams in the last 3 months... it's time to find a different way to handle this.

My apologies to any RSS reader that croaks :)

Posted by SilverStr at 05:55 PM | Comments (0) | TrackBack

November 11, 2005

The horror of calling Microsoft PSS... resulting in asking the blog community for help

Man, am I vexed. I just burnt through a Microsoft PSS case (that's the product support services you pay money for) trying to get some help with a BadImageFormatException I was having... only to be brushed off 4 days later with a statement to the effect of:

"... we can't help you because it includes an open source component"

Great. WHY THE HECK DIDN'T YOU TELL ME THIS 4 DAYS AGO WHEN I SENT YOU A SAMPLE PROJECT WITH CODE SHOWING THE ISSUE... before I spent all that time trying to communicate with you about the problem I am having. And to boot, it was really difficult speaking with the person, who I can only assume works in Bangalore, as her extremely strong accent made me feel as if she was being confrontational the entire time.

To be fair, the exception IS occurring in the open source component... but as a result of including a Microsoft IE OCX control. I can add other user controls without incident. I couldn't understand what the problem was and went to MS PSS for help, at the direction of a Canadian Microsoft employee who wasn't able to help me. Microsoft doesn't offer .NET libraries for MMC 2.0 in .NET 1.1 (which will continue to reside on SBS for some time yet), so I was forced to use the open source components.

At this point, it is clear that Microsoft isn't able to help me on this issue. The PSS person said something about her manager telling her to delete all of my sample code as the open source piece hadn't been passed through "legal". So I gather I can't talk to any of my MS friends about this either. *sigh*

Hopefully someone out there in blog land can help me out. I have a sample project that shows the exception. If you are running VS.NET 2003 with C++ and C# installed you should be able to unzip this into the c:\code dir and build right away.

The issue I am having is trying to add the Microsoft IE control (AxWebBrowser) to a FormsNode acting as a RootNode in the open source MMCLib2 library. When you click on the snapin, it causes a "BadImageFormatException" for the IE control.

Steps to Repro the Exception:

  1. Run in Debug (F5). An empty MMC console should pop up.
  2. Add a new Snapin (CTRL+M)
  3. Press the "Add" button
  4. Scroll to bottom of Add Standalone Snap-in dialog and select "Test Dashboard"
  5. Hit the "Add" button
  6. Hit "Close" Button to close "Add standalone Snap-in" dialog
  7. Hit "OK" to close "Add/Remove Snap-in" dialog
  8. Click on the "Test Dashboard" snapin now in the tree. Notice that the snapin loads correctly, and there is a blank groupbox.
  9. Close the MMC console. Do not save settings when prompted.
  10. Uncomment line 40 in TestDashboardControl.cs
  11. Rebuild Solution. (CTRL+Shift+B)
  12. Repeat steps 1 through 7.
  13. Click on the "Test Dashboard" snapin now in the tree. Notice the exception occurring on line 117 of FormNode.cs. This is the issue to be resolved.

If there is ANYONE out there that has any ideas on what the problem is, I would love the help. I tried getting help from both the open source community and Microsoft, and I am now starting to feel like I am being orphaned/alienated for using the bloody library. This is just nuts.

UPDATE - Nov 12th 6:30pm: Many thanks to Aaron Robinson for emailing me a workaround to my issue yesterday. It looks like my use of aximp created a somewhat fubar set of interop libs for ShDocVw.dll and AxShDocVw.dll. He gave me a different set of command line parameters that seems to create images the system is willing to accept. I have NO IDEA why those same "corrupted" images work fine in standalone WinForms, but not as controls in a snapin built with the MMCLib2 stuff. Anyways, if you are interested, here is the workaround:

  • Remove the AxShDocVw and ShDocVw references. Toast the DLLs in the project dir.
  • Add a reference directly to c:\windows\system32\shdocvw.dll. This creates the interop assembly in the /bin/Debug folder, interop.shdocvw.dll
  • Open up a cmd window and cd into the project folder
  • Run aximp with the following cmd line args:

    aximp c:\windows\system32\shdocvw.dll /rcw:c:\code\TestDashboard\bin\Debug\interop.shdocvw.dll

    This produces AxShDocVw.dll in the project folder.
  • Add a reference to the new AxShDocVw.dll
  • Recompile with line 40 uncommented.

Voila. That works and the browser control comes up as expected. Now to figure out WHY it works, and to determine what DLLs I should be distributing in the installer. I am not sure if interop.shdocvw.dll has to be distributed... or if I am even allowed to legally.

Thanks again to Aaron for the help!

Posted by SilverStr at 01:14 PM | Comments (0) | TrackBack

Microsoft, quit reinventing role-based security in your products!

Last night at the Vancouver SBS UG we had a presentation by Scott Colson, CRM-MVP, on the upcoming MS CRM 3.0. I was really looking forward to this, since my last experience in learning about CRM 3.0 was a dismal failure.

This time, it was much better. I now understand the real vs. perceived benefits of Microsoft's CRM product, especially its interaction with the Windows stack. I think the product has a ways to go to be as clean as I would like to see it... but the 3.0 product is way better than what I saw in the 1.2 demo I was given earlier last year.

But that's not the point of this post. I cringed as Scott discussed the security of CRM 3.0. Not that the security is weak... but that Microsoft is reinventing the wheel again with the way it associates security roles in its products. I complained about this before when I found the same problem with SharePoint on SBS.

Now, Microsoft may have some reasons for doing this, so it's hard to be too critical here. Especially since CRM is licensed by named user. But their approach seems to go against good infosec principles and practices as they relate to single-point role-based security management. Let me explain.

One of the strengths of AD is the fact that you can associate roles through Security Groups. As an example, on our SBS AD at the office our security model has us creating ACEs (Access Control Entries for users and groups) based upon job function and then using very specific ACLs (Access Control Lists) to lock them down, allowing users the ability to perform only their job function and nothing more. Least privilege at its finest.

It is easy to build a "sales" role and a "marketing" role with AD Security Groups and assign users to those groups as required. What I EXPECTED from Microsoft's CRM package was the ability to respect that associative authorization and allow me to carry it on through CRM. Unfortunately, that is not what happens.

Microsoft started it right, by having the user authenticate to AD. So CRM knows user "bob" is currently logged on from that machine, but then it dies there on the AD security role side of things. It AUTHENTICATES user "bob", but doesn't allow AD to AUTHORIZE what he is allowed to do. At this point, CRM looks at its own built-in security roles to determine what privileges "bob" is authorized to have. The association is to the user via the role he is given in CRM, and NOT in AD. It makes sense on its own, and Microsoft did a great job with the "Security Roles" security matrix for assigning privileges within CRM. But in my mind, the approach is flawed. And let me tell you why.

When using the Windows stack with AD for security, the assignment and control of security privileges should remain in AD. CRM, and every other Microsoft product for that matter, should respect that so changes in AD are immediately reflected throughout the entire security infrastructure of the organization, which should impact privilege levels in the software using that associative authorization. This has HUGE impact whenever a user's job function changes. If I remove user "bob" from the sales role in AD, it should IMMEDIATELY be reflected in CRM that bob is not allowed access to the sales role. Currently, you would have to make the change in AD, then go into CRM and also change it there. However, if the "Security Roles" in CRM were associated with Security Groups instead of Users, this would solve the whole problem.

Does that make sense? You can still build "Security Roles" in CRM so that you can assign privileges to the different Entities within the software. But there should be an option so that those Security Roles can be associated with "Security Groups" in AD rather than with specific users, so that security changes impacting authorization privileges in AD are immediately reflected in any software taking advantage of AD.

Imagine this scenario. User "alice" is in a junior position as a front-line salesperson, and her security role in AD has her in the Security Group "Sales". In CRM, this gives her the ability to add new events for leads and hot prospects but does not allow her to query other "Sales" associates to see how they are doing... a privilege that is only afforded to the "SalesManager" role, which user "bob" currently has. At the end of this month "bob" is leaving the organization and "alice" will be promoted to his position. All that would have to happen is disabling "bob's" account in AD, which would prevent him from accessing the system. And this works now... since login auth is done with AD. But now comes the nice part. By removing "alice" from the "Sales" role and placing her in the "SalesManager" role, she would immediately have the permissions required to do her job. Without anyone having to go into CRM and make a single change. And it would immediately give her access to any other privileged resource for that role within the domain in the same manner.
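To make that concrete, here is roughly what group-driven authorization looks like from managed code. A minimal sketch using WindowsPrincipal.IsInRole, which is emphatically NOT how CRM does it today; the domain and group names are made up to match the scenario above:

    using System;
    using System.Security.Principal;

    class RoleCheckSketch
    {
        // Minimal sketch of AD-group-driven authorization; not how CRM works
        // today, just what I am arguing for. "MYDOMAIN\Sales" and
        // "MYDOMAIN\SalesManager" are made-up group names.
        static void Main()
        {
            WindowsPrincipal user = new WindowsPrincipal(WindowsIdentity.GetCurrent());

            if (user.IsInRole(@"MYDOMAIN\SalesManager"))
            {
                // "alice" lands here the moment AD drops her into SalesManager.
                Console.WriteLine("Can query how the other sales associates are doing.");
            }
            else if (user.IsInRole(@"MYDOMAIN\Sales"))
            {
                Console.WriteLine("Can add events for leads and hot prospects only.");
            }
            else
            {
                Console.WriteLine("No sales privileges at all.");
            }
        }
    }

No per-product role table to keep in sync; the moment the group membership changes in AD, the software follows.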

I don't understand WHY Microsoft feels it has to reinvent how the security model works with each product. The same scenario exists with SharePoint. It would be much easier to assign roles through Security Groups in AD instead of assigning users to built-in roles there as well.

To be fair, there could be design decisions I don't understand here. I haven't seen a threat model explaining why they separated authentication and authorization like this. But it seems silly to me to duplicate effort and require the workflow for security changes to take place in two separate places. It is much too easy to forget the CRM step, which could open up the business to unnecessary risk. And a single model becomes MUCH easier to manage when you have tens or hundreds of users that have to use the software.

Use Active Directory. That is what it's for. And Microsoft, you OWN it. You should be leading us here... giving us the option to have the entire Windows stack work together through a single role-based security model when using your software.

Posted by SilverStr at 11:30 AM | Comments (3) | TrackBack

We Remember

.-.. . ... - .-- . ..-. --- .-. --. . -



Lest we forget

To my fallen brothers.... "Chimo".

Posted by SilverStr at 09:48 AM | Comments (0) | TrackBack

November 09, 2005

HOWTO: Get around Windows Mobile's Space in Password Bug

If you are a user of a Windows Smartphone, listen up. I found an interesting issue today with my Audiovox SMT5600 running Smartphone 2003, and I thought I would bring it to your attention.

If you happen to change your Active Directory password to one with a space in it and then try to sync to the Exchange server, it will fail. Well, actually it fails because there is a new password. The problem lies in the fact that you can't CHANGE the password on the Smartphone, because the password field doesn't recognize the # key as a space! And if you hold the # down to load the symbol table... space isn't a valid symbol!

This irritated me this morning, as I was trying to sync this over GPRS while at a breakfast meeting. Was a no go. So when I got back into my office, I went on the hunt for a fix.

Turns out the fix is not to set the password up on the phone, but instead to do so in the ActiveSync desktop application. Here are the steps to do that:

  1. Start ActiveSync (or double-click on it in the tray if it's already running)
  2. Click on the Options toolbar item
  3. On the Sync Options tab click the Configure button under Server
  4. Click the Connection button
  5. Set the username, password and domain there
  6. Hit OK to dismiss all the dialog boxes, and push the changes to the smartphone (assuming it's connected with the USB cradle)

At this point you should be ok and you should be able to sync again. Kinda silly if you ask me that spaces are not recognized in the password field on Windows Mobile. *sigh*

Serves me right for making a complex password. *lol*

Posted by SilverStr at 09:42 AM | Comments (0) | TrackBack

November 07, 2005

Aardvark'd: 12 Weeks With Geeks

As most readers know, I am a fan of Joel on Software. My company typically scores a 12/12 on the Joel Test. (We have our moments.)

Anyways, if you are a fan of Joel you know about Project Aardvark. If you don't, it was a summer intern project that resulted in Fog Creek's Copilot.

One of the other things Joel did was put out a call for a documentary filmmaker. That's right... they filmed the whole thing!

This should offer an exciting look into software development at Joel's firm. And you can now pre-order the DVD. I would have posted sooner... but I wanted to make sure I got my order in before you... there is only a limited number that will ship when it's available, after all. :)

For the software dev geeks around here... you can consider yourselves all invited to a private screening when it comes in. For the rest of you... run now and go pre-order yours!

Posted by SilverStr at 12:32 PM | Comments (1) | TrackBack