IIS 7.5 and 2 Level Auth

We use a large vendor application at work.  We host all the infrastructure for the application inside the firewall, so there is absolutely no access from the Internet.

In IIS6 we configured 2 level authentication – NTLM and Forms Auth.  The vendor requires Forms Auth for the application.  Given the importance of this application and the sensitive nature of the data, I also enabled NTLM and restricted the site to people in our division (about 450 people).  There are about 150 logins in the application, meaning that 300 people have access to the site even though they will not be able to actually see any screens until they log in.

Through a series of discussions with different audiences, it was decided that there is still enough of a risk of those 300 people being infected with something that takes advantage of cross-site scripting or other classic vulnerabilities.  So I further locked down the site using a more restrictive group.  While I feel like we are being a little paranoid about it, I capitulated.

Enter IIS7…


Our standard for servers is Windows 2008 R2, so we are on IIS 7.5.  Doing this same 2 level authentication on IIS 7.5 did not work.  Why?  Because of the integrated pipeline…it simply cannot do both at the “same time”.  One has to come first.  In IIS 6, NTLM always came first since that was done by IIS, and then Forms Auth since that was done by ASP.NET.

There are a couple of hacks out there that describe how to work around this.  One of which I found posted here by Mike Volodarsky (formerly of the IIS team).  Here he talks about a way to make this work by splitting up the authentication and forcing one to happen before the other.  I was up until well after midnight last night trying to figure out how I would make this work given that the application is a vendor application and I don’t have the source code.  Not to mention that everything is precompiled, signed and obfuscated.  All of which adds up to…this would be really hard to hack.

Finally, after a bit of chin rubbing…I came to the conclusion that the integrated pipeline may not be the problem at all.  Why do I even still need NTLM?  If the only way for someone to access a web page on the site is to have a valid Forms Auth token, then do I really need to force them to also have an NTLM token?  I went to bed content that I just need to leave NTLM behind in this case.
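For reference, Forms Auth on its own is just the standard ASP.NET setup in web.config – something along these lines (illustrative only; the vendor ships their own config, and the loginUrl here is hypothetical):

```xml
<system.web>
  <!-- Forms Auth only; no NTLM in front of it -->
  <authentication mode="Forms">
    <forms loginUrl="~/Login.aspx" timeout="30" />
  </authentication>
  <authorization>
    <!-- deny anonymous users; no valid Forms Auth token means no page -->
    <deny users="?" />
  </authorization>
</system.web>
```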

Now I just need to convince everyone who was pushing the original requirement for 2 level authentication that I don’t need it anymore.  Since they don’t really understand the technology very well, that could be a challenge.  Given that the way we got here was through a vulnerability scan of the web site in the first place, perhaps requesting another one will demonstrate my point and I won’t have to make them understand the why.

I will post an update on the outcome.

TFS Recovering Shelveset for Invalid User

One of the developers on the team was getting a TFS error (below) yesterday while trying to access a shelveset for a developer who left the account a couple of months ago.  It turns out he needed some of the code in the shelveset.

 TF50605: There was an error looking up the SID for TC30014  

Note to self…shelvesets are probably not the way to do this; a branch would be a better construct.  I want to encourage them to be doing more of these anyway.

The problem is that TFS looks this user up in Active Directory and the user does not exist anymore.  I can see the shelvesets in TFS using either TFS itself or TFS Sidekicks.  I included a screen print of all the active shelvesets for this user in Sidekicks.


So TFS must be doing some lookup on the user and, when it does not find it, errors out.  Not knowing how to solve this, a couple of searches later (“tf50605 -vssconverter”) I found this article.  While not directly what I needed, it was enough information to crank up SQL Server Management Studio and start poking around a bit.  I started with the OwnerId for the user that was removed and for the user who was trying to get the code.

 SELECT IdentityId FROM tbl_Identity WHERE (DisplayName LIKE 'ad2\TC30014')  
 SELECT IdentityId FROM tbl_Identity WHERE (DisplayName LIKE 'ad2\RMxxxxx')  

Once I had this I plugged the deleted user’s id into the following query to get all the workspaces.

 SELECT TOP 1000 [WorkspaceId]  
  FROM [TfsVersionControl].[dbo].[tbl_Workspace]  
  WHERE OwnerId = 276  

This showed me a bunch of workspaces.  What I noticed is that evidently shelvesets and workspaces are stored in the same table, distinguished by type.  So after a little bit of inferring and playing in TFS, it looks like if I hack this table, I can reassign all the shelvesets to a valid user (which is sort of the spirit of the article above).

Leaving out some of the details, I ended up with the following query that reassigns the orphaned shelvesets (type=1) from one owner to the other.  Since the WorkspaceName is part of the primary key (and relatively short), I changed the name so that the new owner could distinguish between his shelvesets and those that were reassigned.

  UPDATE [TfsVersionControl].[dbo].[tbl_Workspace]  
  SET OwnerId = 123,  
    WorkspaceName = RIGHT(WorkspaceName + '-Reassigned',64)  
  WHERE OwnerID = 276   
    AND Type = 1  
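The RIGHT(WorkspaceName + '-Reassigned', 64) trick just keeps the rightmost 64 characters so the renamed shelveset still fits the column.  The same truncation logic, sketched in Python purely for illustration (the 64 matches the column width in our TFS database):

```python
def reassign_name(name, suffix="-Reassigned", max_len=64):
    # keep the rightmost max_len characters, like T-SQL RIGHT(name + suffix, 64);
    # short names keep the whole suffix, long names lose characters on the left
    return (name + suffix)[-max_len:]
```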

Looking back at TFS Sidekicks (I verified it first in SSMS – wink), I could see that the more recent shelvesets had indeed been reassigned.  Success!


Now granted, we are on a relatively old version of TFS, so this hack may already be obsolete.  But I wanted to put it out here just in case.

WordPress and Word

Microsoft Word has a feature to use Word to compose and publish a blog entry. I have used this periodically and have had mixed feelings about it. Now that I am hosting my own blog using WordPress I wanted to test this feature out again. How does it work with formatting different things and how well does the overall look and feel match the rest of the blog?

Here is some code…

static bool RenameFile(FileInfo fi, string newFullFilename)
{
    try
    {
        Console.WriteLine("New={0}", newFullFilename);
        fi.MoveTo(newFullFilename);   // the rename itself
    }
    catch (Exception ex)
    {
        Console.WriteLine("Error {0} renaming {1}", ex.Message, newFullFilename);
        return false;
    }
    return true;
}


Here is a picture…

I notice that it does not do multi-column or other more advanced formatting normally available in Word. Maybe I will give this a shot since it does give you the robust spelling/grammar checking of Word.

P.S.  I had to go into this post from the WordPress editor and clean up the code section.  The different way of single spacing something using <p> vs <br> is the issue.  Every line of code is a <p> when in fact I want it to end with <br>.  Oh well.  Not as good as I hoped.

I took the code above and plugged it into the code formatter I previously blogged about here.  It looks like the following, which in preview mode looks pretty good.

 static bool RenameFile(FileInfo fi, string newFullFilename)
 {
     try
     {
         Console.WriteLine("New={0}", newFullFilename);
         fi.MoveTo(newFullFilename);   // the rename itself
     }
     catch (Exception ex)
     {
         Console.WriteLine("Error {0} renaming {1}", ex.Message, newFullFilename);
         return false;
     }
     return true;
 }


First WordPress Entry

I have been doing a little more blogging lately and have been growing more frustrated with Blogger each time.  Not that it’s that bad, but it’s not that great either.  I have had my own domain sitting dormant for some time now.  I used to use this as a place out on the Internet where I could test my code “in the real world”.

My wife told me about WordPress a while back and I asked her again about it today.  So I spent the day getting it loaded up, configured, copying over the content and making some customizations.

Overall I like the product.  Especially given the price.

Debugging 101

We had a very nasty issue yesterday (into today) at work.  It involved a problem we saw once before, last year, and never figured out.  Well, it was back yesterday, and it reminded me of some of the important aspects of problem resolution, closely related to debugging skills.

The specifics of this issue are not really important except that the infrastructure of the system has multiple physical tiers and that it is a vendor application hosted internally.  Multi-tier applications have a higher level of complexity that comes from the fact that there is way more code running than just the application itself (VIPs, routers, firewalls, communication stacks, different platforms, etc.).  Vendor applications can just be a pain since you don’t know the internals of what is going on, which means you are sometimes making assumptions (aka educated guesses).

Below are a couple of my favorite principles for debugging an issue.

Write Everything Down

  • This is not a time to test your powerful memory skills.  You will get tired and you will forget.  As you try things you will find things that work and things that don’t.  If you are lucky you will find the fix quickly. If you are not (it took us 20+ hours this time) then you will have many things that did not work.  These are very important and you are probably going to have lots of them.
  • You are going to have people rotating in and out of the virtual team that are going to be a distraction if you have to bring each of them up to speed on what you have already tried.
  • You will resolve this issue at some point and want to restore some of the things you changed.  Write down the current state of the system – “which knobs have been turned”.
  • You may look back on the path you followed to resolution and be able to identify ways to improve the system overall.  If you write this stuff down you will appreciate it a couple of days later when your life returns to normal.
  • Names and phone numbers.  If you have a diversified organization, as we do, you are going to need/get lots of people coming and going.  Many of these folks will have knowledge of, or authorization to change, things that you do not. Once they are members of the virtual team you want to keep them, since they already have context which in and of itself is valuable.

Change One Thing at a Time
Thankfully this is something that I learned very early on and have tried to live by.  I use the word tried because I have succumbed to the temptation to do otherwise and often lived to regret it.  My implementation of this principle is the following…

  1. Draw a conclusion about what you think is wrong.  In other words, don’t go shooting in the dark.  If you don’t know what to do next then stop.  It doesn’t mean that you won’t be doing something soon, but don’t go trying things without first knowing what you think the issue may be.  At some point your conclusion will either be correct and you have “solved” the issue, or it won’t be and you have eliminated another thing that is NOT the issue.  More on what does not work later.
  2. Evaluate your options for correcting.  Write them down; you may want to try all of them.  This is a good time for brainstorming.  You may want to bring some others into the virtual team for a short time to help out here.  Treat them as consultants (see roles and distractions below) and don’t let them linger too long unless they are able to fit in.
  3. Decide your approach for correction.  One person owns the decision as to what the next course of action is (see roles).  There are a million ways of coming to a decision that I will not go into – the key here is that you choose one and let everyone know what the decision is.
  4. Plan your implementation.  This is not as heavy as it may sound.  You don’t want to spend too much time here; not that it is a waste, but at some point in this discussion you will get to a point of diminishing returns.
    1. Identify what you believe the new outcome will be.  
    2. How will you know if it worked?  
    3. Do you know how to roll back the changes you made?  
    4. How are you going to test your change? 
    5. What can go wrong?
    6. What may other outcomes be and what do they tell you?
    7. All things to consider BEFORE you actually implement the change.  I feel another whole blog topic coming just on this point.  If you don’t understand why all of these are important things to consider, then I need way more space than I want to spend here to show you why…so I won’t.  Trust me.
  5. Implement the change.  
    1. Identify who is going to do what and make sure they are clear on what they are changing.  Hopefully they are an expert and not learning as they go. 
    2. Pair programming was never more helpful than now.  Work together to ensure accuracy.   You will be getting tired and mistakes will happen.  Put everyone to use here and let them help by watching for gross errors.  Don’t be afraid to show someone how this works; you don’t want the fireworks effect where every time someone types something the entire room gasps.  This takes patience; it is hard to watch someone else type and it is equally hard to be watched.
  6. Run your test.  This is what we have been working for.  Write down the result(s).  Did something totally unexpected happen?  What does this tell you?  What conclusions can you draw?
  7. Repeat.  Look at this iteration of conclusions, options, predictions and outcomes and decide what you are going to do next.  Do you start over at step 1?  Or someplace in between here and there?
  8. Don’t forget to reset.  You need to choose to restore/reset the environment.  Record what you do and make sure everyone knows the current state. 

Clear Roles and Responsibilities
Important to consider, and feels kind of bland in some ways.  Not to mention that this is probably a whole entry in itself.  A couple of things I wanted to record now are…

  1. Who is running the show? Make sure they can delegate.  Make sure everyone respects the decision when it is made.
  2. Who is communicating?  Boy this is a big topic in itself…
    1. What are the different audiences?
    2. Who makes the decision to communicate?
    3. Is this the same as/related to escalation?
    4. Frequency?  Email?  Phone?  etc.
    5. Blah, blah, blah…
  3. Who is a spectator?  Make sure they know they are.

A good example of poor role definition happened to us during this most recent incident.   The system came back up and an excited member of the team sent an email to the entire customer base that the system was available.  Whoa!  Yes, the system did come back up, but it was not ready for the business to start using yet.  We still had not assessed why the system came back up and whether we thought our success was going to last.  What a pain if the system failed a couple of minutes later.  Also, the system was still in a debug mode.  We had lots of logs turned on and test settings configured that needed to be changed in order to get the system back to its production state.  Luckily the users figured out that something was not quite right and let us know.  We recovered before anything really bad happened, but it could have gone horribly wrong.

Did I follow my own principles this time?  I tried.  But sometimes when you have lots of people involved it is just not possible.  People get anxious and/or want to contribute.  They have good intentions but in the end it muddies the whole thing.

In our case we had a couple of people off in the corner of the room trying things.  One with elevated privileges and the other with a little bit of knowledge, but not a core member of the team.  They started hacking around without anyone else knowing.  They changed a bunch of things on a test server and found that the production environment was back up.  The likelihood that they actually did anything is very low, but now we don’t know, since we don’t know what they did or the state of the environment before they did it.  Chaos.  Now we are left with a nagging question.  This has left me with a couple of new principles.  It is not very well thought out at this point, but I wanted to get it down now before I forgot.

Eliminate Distractions
Distractions come in many forms and they can slow you down or just plain hurt.

  1. Don’t have anyone involved that does not need to be.  Excitement tends to draw crowds, so you need to know when to put up the yellow tape.  I don’t want to be militant about this because there are some people out there that are comfortable being in a peripheral role (see above) and know when to contribute and when to stay out of the way.
  2. Get to a war room or isolated area that makes all the other principles easier.  
    1. We have several big rooms with 80″ smart board/displays and lots of whiteboard space; which can aid in the documentation.  
    2. They also have table-mounted speakers and lots of ceiling speakers for good audio, because you will likely have a distributed team and communication with all of them is going to be hard enough – forget it if you cannot hear one another.
    3. Getting away from the crowds can keep the crowds away.
  3. Don’t forget the creature comforts: food, drink, restrooms.  These are obvious things that the team will need during an incident, but they can also be distractions.  If the restrooms are way far away, then it just hurts.  If people are hungry they can be distracted.  You also don’t want everyone fending for themselves if you don’t have to.  I kept bringing in food for the team.
  4. Get sleep when you need it.  No heroes. If you are getting punchy then you are probably going to become a distraction for the entire team.  There are all kinds of studies out there that relate being tired to being drunk – don’t debug drunk.  You will swerve over the yellow line.

Understand Vendors in Scope
Make sure you understand the vendor products or services in your application before you have an issue.  What support arrangement do you have with them?  Is it 24×7?  How do you reach them?  Make sure the contact information is current.  What is their engagement/escalation model?  Do they know your environment?  If not, how are you going to educate them?  Are you sure that the sharing technology (WebEx, etc.) they use is compatible inside your firewall?  Are you current / do they support the version you are on?

More to say here, but I am running out of steam.  I may revisit this at a later time.

Deep breath.  I am down here at the bottom of this long entry and liking the brain dump.  Not sure how coherent it all is, but it feels pretty good.  Let me know if you find any of this helpful.

As I was wrapping this up I found this interesting article that I thought was worth linking to here.  I am constantly amazed at how many topics there are “out there”.

Continuous Integration, meet Mr. Sarbanes and Mr. Oxley

I have been thinking a lot about Continuous Integration, DevOps and related topics where we are building more and more tools to help take the variability out of building systems.  By building systems I am talking about the phase of the SDLC between when we begin thinking about writing the actual code and the time we deploy that code to production.  When I think of the lifecycle of an application, this loop stands out to me as one that gets executed many, many (!!) times.  So it makes sense to 1. make this as efficient as possible and 2. increase the accuracy as much as possible.

We are doing this in a couple of different ways which I am not sure I can go into much detail about because of company policies.  But suffice it to say that we use a third-party build/test tool for our code that checks everything out of source control, modifies configuration, builds, runs unit tests and deploys the code (and documentation).

This year we have a new requirement that has me rubbing my chin quite a bit.  The requirement from the auditors is that (according to SOX) those people in the development role cannot have write access to production bits and those in the deployment role cannot have access to the development bits.  When I say bits I am talking at the runtime/deployable level (not the source code).  The rationale behind this (so I am told) is that it prevents anyone from introducing changes into the process where code is promoted from lesser test environments to production.  Reserving any commentary about how I feel about this policy, it is something that we are being required to do.  And given the sensitivity regulators have around the investment management industry and the enterprise approach the parent company takes – it doesn’t really matter how I feel.

My anti-strategy is to not use human separation to implement this.  Having a team just to press a button that I asked them to press seems like a waste.  Sure, nothing should go wrong if we are doing this right, but something WILL go wrong.  Can you just imagine the conversation between the deployer and the developer when this happens…

Deployer: “The package failed”
Developer: “What was the error”
Deployer: “Some really big negative number”
Developer: “Can you send me the logs?”
Deployer: “Where are they?”
Developer: “Try this new package”
Deployer: “I don’t see approval from your manager for this”
Developer: “It is 2am my manager is sleeping”

The other issue I see with the human solution to this is that of production support.  When in a time critical break/fix scenario – the last thing you want to have to do is get through a process that does not understand your systems, your business and therefore the context in which they are performing.  Sure you have separated the roles, but at what cost?

My strategy is that by leveraging our continuous integration tool I should be able to accomplish much of this, since the deployer is a system/application itself, and as long as I can show accountability then we should be all set.  I will admit I am getting some initial resistance to this, but I am hoping that through a partnership with the auditors we can figure out a reasonable way to do this.

One of the interesting topics around this new requirement is the DBAs – since they inherently break the developer/deployer separation by having access to everything in every environment.  The database seems to me like the perfect place to be doing something that is “not on the up and up”.  Interestingly, the DBAs are considered out of scope for the requirement.  Biting my tongue.  Is this implying that developers are inherently less trustworthy than DBAs?  Or that no DBA is savvy enough to possibly change a sproc (aka code) to do something devious?  Maybe it is that the DBA lobby is that much better than the developer lobby.


Recursive Yield Return

I was writing a recursive routine the other day and wondering what an implementation would look like should I convert it to use yield return.

Much to my consternation, this was not as easy as I thought it would be.  It took me almost a week to get it working – not of constant effort, of course, but elapsed time.  In my initial implementation I could not get my head wrapped around whether each yield return was going to bypass all the calls on the stack and return a result to the caller, OR whether it was going to just pop one call context.  Turns out it is the latter, which greatly complicates things.  The implementation of this particular routine kept escaping me.

I was downstairs meditating last week, not thinking about anything in particular and it hit me.  Like a flash…I could see the implementation.  I ran upstairs and quickly wrote down the rough implementation.  I felt kind of like a musician when a riff for a song hits them in their sleep and they need to quickly write it down before they forget it.

I came back to the code after dinner and put the finishing touches on it.

The first method is the seed method, implemented as an extension method on IEnumerable.  It in turn calls the recursive method.

The implementation below is a function that, given a collection, will return you a collection of all the permutations (a collection of collections).

For instance if you pass
    [ 1, 2, 3],
    [4, 5,  6],
    [7, 8,  9]

This routine will return 3x3x3 (27) collections, each of which will contain 3 items.  Using the data above, here are the first few collections returned…
   [1, 4, 7],
   [1, 4, 8],
   [1, 4, 9],
   [1, 5, 7],


public static IEnumerable<IEnumerable<T>> GetPerm<T>(this IEnumerable<IEnumerable<T>> domain)
{
    return GetPermRecur(domain);
}

public static IEnumerable<IEnumerable<T>> GetPermRecur<T>(IEnumerable<IEnumerable<T>> domain)
{
    var c = domain.Count();
    var firstFromDomain = domain.First();
    if (c == 1)
    {
        // base case: one collection left, each item is its own result
        foreach (var item in firstFromDomain)
            yield return new[] { item };
        yield break;
    }
    var domainWithoutFirst = domain.Skip(1);
    var permSoFar = GetPermRecur(domainWithoutFirst);
    foreach (var item in firstFromDomain)
        foreach (var curCol in permSoFar)
        {
            // prefix the current item onto each result from the rest of the domain
            var curPerm = new List<T> { item };
            curPerm.AddRange(curCol);
            yield return curPerm;
        }
}
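For what it’s worth, the same recursion is easier to see in a language with first-class generators.  Here is a Python sketch of the same idea – illustrative only, not the code I shipped:

```python
def get_perm(domain):
    # domain is a list of collections; yields one combination at a time
    if len(domain) == 1:
        # base case: each item in the last collection is its own result
        for item in domain[0]:
            yield [item]
        return
    for item in domain[0]:
        # prefix the current item onto each result from the rest of the domain
        for rest in get_perm(domain[1:]):
            yield [item] + rest
```

With the 3x3 data from above, this produces 27 lists starting [1, 4, 7], [1, 4, 8], [1, 4, 9], [1, 5, 7]…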

Code Formatter for Blogger

I bowed down at the altar of the all-knowing oracle (aka search engine) and asked what I could use to format the code in my previous post.

The answer was this.
Copy the code into the first text box.  Press a button.  Out comes the HTML.  Nice.
I turned off the last option (Alternate Background) to get rid of the alternating highlights.  Yuck.

Google Contacts Importer

I could not get a bunch of contacts to import into Google Contacts the way I wanted.  I had a CSV that I exported from Outlook, but no matter what I did to the CSV header names, I did not get the phone numbers split out for the contact correctly.  Instead the phone numbers would all be put as text into the Notes of the contact.  Useless.

So I rolled up my sleeves and wrote the following code to import the CSV file.  I made some assumptions (because I could) about the order fields appear in within the file and their format.  For instance, the name I have is in the “Last, First M.” format, so the code breaks this up.  Also, all the phone numbers are formatted correctly, so no fixing is required.

Another assumption is that all the contacts are added to the same group (see the constant at the top), which does not have to be the same as the company name in the CSV file.  I probably could have cleaned this up a bit, but it worked for what I needed.
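The name-splitting assumption is simple enough to sketch on its own.  Here it is in Python for illustration (split_name is a hypothetical helper mirroring the parsing inside ReadCsv below):

```python
def split_name(whole_name):
    # "Last, First M." -> (first, last); fall back to "First Last" when no comma
    if "," in whole_name:
        last, first = [part.strip() for part in whole_name.split(",", 1)]
    else:
        first, _, last = whole_name.partition(" ")
    return first, last
```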

Hope this helps someone…

CSV Format

 UserName,Company,Department,Cell Phone,Home Phone,Work Phone,Mobile Phone 2  
"Adams, Gomez",My Co,Accounting,,(212) 555-4805,(212) 555-6748,

C# Code

 using System;
 using System.Collections.Generic;
 using System.IO;
 using System.Linq;
 using System.Diagnostics;
 using Google.Contacts;
 using Google.GData.Contacts;
 using Google.GData.Client;
 using Google.GData.Extensions;
 using LumenWorks.Framework.IO.Csv; // http://www.codeproject.com/Articles/9258/A-Fast-CSV-Reader

 namespace UploadContacts
 {
     class MyContact
     {
         public string FirstName { get; set; }
         public string LastName { get; set; }
         public string Company { get; set; }
         public string Department { get; set; }
         public string CellPhone { get; set; }
         public string HomePhone { get; set; }
         public string WorkPhone { get; set; }
         public string CellPhone2 { get; set; }
     }

     internal static class Program
     {
         private const string CompanyNameForGroup = "My Co";
         private const string AppName = "Test";
         private const string Username = "Me";
         private const string Password = "blah";
         private static ContactsService _cs;

         private static void Main()
         {
             var himcoGroupId = LookupGroup(CompanyNameForGroup).Id;
             var data = ReadCsv();
             foreach (var contact in data)
             {
                 var newEntry = CopyData(contact, himcoGroupId);
                 AddContact(newEntry);
             }
         }

         private static void AddContact(ContactEntry newEntry)
         {
             if (_cs == null)
             {
                 _cs = new ContactsService(AppName);
                 _cs.setUserCredentials(Username, Password);
             }
             var displayName = string.Format("{0} {1}", newEntry.Name.GivenName, newEntry.Name.FamilyName);
             try
             {
                 var feedUri = new Uri(ContactsQuery.CreateContactsUri("default"));
                 _cs.Insert(feedUri, newEntry);
                 Console.WriteLine("Added {0}", displayName);
             }
             catch (Exception ex)
             {
                 Console.WriteLine("Error {0} while adding {1}", ex.Message, displayName);
             }
         }

         private static ContactEntry CopyData(MyContact contact, string himcoGroupId)
         {
             var fullName = string.Format("{0} {1}", contact.FirstName, contact.LastName);
             var newEntry = new ContactEntry
             {
                 Title = { Text = fullName },
                 Name = new Name { GivenName = contact.FirstName, FamilyName = contact.LastName }
             };
             newEntry.Categories.Add(new AtomCategory(CompanyNameForGroup));
             newEntry.Organizations.Add(new Organization
             {
                 Department = contact.Department,
                 Name = contact.Company,
                 Primary = true,
                 Rel = ContactsRelationships.IsWork
             });
             if (!string.IsNullOrEmpty(contact.CellPhone))
             {
                 newEntry.Phonenumbers.Add(new PhoneNumber(contact.CellPhone)
                 {
                     Primary = true,
                     Rel = ContactsRelationships.IsMobile
                 });
             }
             if (!string.IsNullOrEmpty(contact.CellPhone2))
             {
                 newEntry.Phonenumbers.Add(new PhoneNumber(contact.CellPhone2)
                 {
                     Primary = false,
                     Rel = ContactsRelationships.IsOther
                 });
             }
             if (!string.IsNullOrEmpty(contact.HomePhone))
             {
                 newEntry.Phonenumbers.Add(new PhoneNumber(contact.HomePhone)
                 {
                     Primary = false,
                     Rel = ContactsRelationships.IsHome
                 });
             }
             if (!string.IsNullOrEmpty(contact.WorkPhone))
             {
                 newEntry.Phonenumbers.Add(new PhoneNumber(contact.WorkPhone)
                 {
                     Primary = false,
                     Rel = ContactsRelationships.IsWork
                 });
             }
             newEntry.GroupMembership.Add(new GroupMembership { HRef = himcoGroupId });
             return newEntry;
         }

         public static Group LookupGroup(string name)
         {
             var rs = new RequestSettings(AppName, Username, Password);
             var cr = new ContactsRequest(rs);
             var feed = cr.GetGroups();
             var retVal = feed.Entries.Where(i => i.Title == name);
             return retVal.First();
         }

         public static IEnumerable<MyContact> ReadCsv()
         {
             var retList = new List<MyContact>();
             using (var csv = new CsvReader(new StreamReader(@"C:\Users\Curtis1\Dropbox\Code\UploadContacts\UploadContacts\PhoneNumbers.csv"), true))
             {
                 while (csv.ReadNextRecord())
                 {
                     string fname;
                     string lname;
                     var wholeName = csv[0];
                     // "Last, First M." format; fall back to "First Last" when no comma
                     var nameParts = wholeName.Split(new[] { ',' });
                     if (nameParts.Length >= 2)
                     {
                         fname = nameParts[1].Trim();
                         lname = nameParts[0].Trim();
                     }
                     else
                     {
                         nameParts = wholeName.Split(new[] { ' ' });
                         fname = nameParts[0];
                         lname = nameParts[1];
                     }
                     if (fname.Length == 0 || lname.Length == 0)
                         continue;
                     var newContact = new MyContact
                     {
                         FirstName = fname,
                         LastName = lname,
                         Company = csv[1],
                         Department = csv[2],
                         CellPhone = csv[3],
                         HomePhone = csv[4],
                         WorkPhone = csv[5],
                         CellPhone2 = csv[6]
                     };
                     retList.Add(newContact);
                 }
             }
             return retList;
         }
     }
 }