Nasty Security Issue

Yesterday a colleague asked me to help with a security issue he was having.  He is someone whom I consider to be a good developer, but he does not have a good understanding of Windows security beyond the basics… Right click on folder -> Properties -> Security.

The problem manifested as this developer (only) losing access to a NAS share – “it used to work”.

First I confirmed that he was in the correct Active Directory (AD) groups and those groups had sufficient permissions to the folders in question.  Check.  Conclusion – it should work.

Then I had the developer write a quick web application to make sure he was passing his full credential to the web server.  Check.  Conclusion – not something that is globally impacting authentication/authorization; it seems to be something in the “conversation” between this workstation and the file server.

Hmmm.  The next thing was to try this from another workstation.  He remoted into an old Windows XP workstation.  It worked!!  Conclusion – something is going on with his workstation.

Now I know that on my laptop, when I log into Windows but I am not connected to the network – I am using a cached credential on my workstation.  I also know that part of that credential is the groups that I am a member of.  So it seems that what this particular workstation is passing to the file server does not have the right groups since the server is denying the request.  Seems like there is something dirty/broken in the cache.

I typed “windows delete cached credential” into my favorite search engine and got this post.  We followed the steps that Ashok spells out…

  1. From Control Panel\All Control Panel Items\User Accounts, click the username.  To the left you will see Manage your credentials; from there, select the share name and remove it.
  2. Delete the mapped connections using net use: Start > Run > cmd > net use * /DELETE

…and shazam…it worked.

There is a first for everything – this was definitely a first.

Now on to a nasty SharePoint issue that I have been putting off.  Hey, at least I may end up with another blog post.


I was recently helping some colleagues think about interviewing techniques. I like to see work product from people. So a while back I developed an assessment that we use. It has been a big help in evaluating candidates.

When I worked at Microsoft we took our interviewing very seriously. Certainly there are lots of stories (myths?) out there about the process. Some truer than others. I have a master list of good questions and suggestions I still use from that time. It is helpful to review periodically to just get into the right mindset.

Along these lines… I was recently turned on to InterviewZen.  I like the idea of being able to watch a recording of someone creating a work product.  We have a classic computer science sort of problem (weighted graph) that I use, but I don’t get to see the person working through their thinking.  I am going to try to adapt mine to this format.

PowerShell Quickee

As this title rolled off my fingers it made me laugh a little.

But hey, isn’t everything in PowerShell a little quicker?  At least that is the intent.  Over the past few years I keep dabbling in PowerShell from time to time.  I have just enough experience to know what can reasonably be done in the tool – without overextending it.  <soapbox> It looks like there are some people who take this tool and use it to bang in every nail they have.  But that is another blog entry. </soapbox>

Here is the script I crufted up today for showing me all the services on a server that are set to run at startup (aka Automatic) and yet are NOT running now.

I was using the Get-Service cmdlet but it did not seem to return the StartMode and State properties from a remote server (it works fine locally).  So I Googled up an alternative that uses WMI.

foreach ($ServerName in $ServerList) {
    Get-WmiObject Win32_Service -ComputerName $ServerName |
        Where-Object { $_.StartMode -eq 'Auto' -and $_.State -ne 'Running' } |
        Format-Table -AutoSize @(
            @{ Expression = 'Name' }   # so the output shows which service
            @{ Expression = 'State'; Width = 9 }
            @{ Expression = 'StartMode'; Width = 9 }
        )
}

Keep It Simple (KISS) Revisited

I have a calendar from a vendor we use that has some of the classic coding and design principles – one for each month.  I was rubbing my chin staring at it this morning and I wanted to share what popped into my head…

While I am sure that the KISS principle has been written about (perhaps to death) I had another instance of this today as it applies to operations and infrastructure.

Quick background – I recently inherited an Operations group.  Operations is the clean-up crew of development here.  While I understand the rationale for separating them, I like the idea of developers supporting their own code so that they better understand the impact of what they do.  What a great teaching tool – if you do not want to get up in the middle of the night, fix the code, do a better job in the first place, or write a utility to help you out.  We have a bunch of applications that have been around for years, and over time the developers who maintained many of them have moved on.  So today I asked someone about two AD groups and what they are used for.  In both cases the answer was initially “I don’t know” – and later the answer became “these are not used anymore.”

Part of keeping systems simple is getting rid of the things that are not used anymore.  We have all these extraneous moving parts that we don’t need.  This just creates system bloat that should be easy to remove.

Granted you cannot get to everything – right now.  But this stuff has to get cleaned up over time.  Putting it into some sort of maintenance, wish list or Kaizen log seems like an easy thing to do.

All it takes is discipline.

Yiddish for IT Leaders

I have come to the conclusion that I need to know more Yiddish.  Can I use it as a code to hide what I am really thinking?  Help bypass any email filters?  Just to make me feel better?  Here is my arsenal.

bupkis – As in – you don’t know bupkis.

chutzpah – There is always one team member with too much of this.

glitch – Things are late again?  It must be another glitch.

kibitz – What we should call reviews.

klutz – You don’t want this and programming to go together.

kvetch – What I do when I get home from work.

nudnik – In management speak these are the team members you manage out.

schmuck – What you call someone who changes something directly in production – first.

schtik – A little off topic, but I think of Benji Bronk on the radio on my way into work.

shpiel – My weekly team briefings have at least one of these.

yutz – Yiddish has lots of fun words for describing people that bum you out.

My phone is a fishing lure

Through a series of natural causes/events, my phone was in the kayak, the kayak filled with water, and now the phone is dead – probably better suited for catching fish or throwing at large game and rendering them unconscious.


This, of course, means that I have to buy another phone. Note that I did not say a new phone. It seems to me that the cell phone market sucks for anyone who, like me, is hard on a cell phone. There is no way for me to afford paying full retail because I am only months into my new contract.

This model is why I see so many screens with the spider cracked glass. Heck, someone at work actually had a piece of clear box tape over his screen to keep his phone usable to the end of the contract.

I cannot believe that the hardware for a phone actually costs that much. My hunch is that cell phone manufacturers are trying to recoup R&D costs for the phone and all the customizations that they put on top of the base OS. Apparently the competition is so tough that they feel they need to customize in order to distinguish themselves.

Wish someone would figure out a way to make a fast (obviously not going to be cutting edge) phone with a base (aka free) version of Android.  Rather than going after performance or a feature-rich environment, they would go after cost.  I have seen some base Android phones and the OS is very usable.  Negotiate with carriers to not preload those annoying apps that kill the battery life.  Figure out a different model, maybe even a Kindle-with-ads-like model.

Thinking about prevention… in this case it would have just taken a ziplock bag.  Wish the kayak dude had a box of them for his clients.  Otherwise, I guess the only case that may work for me is one of those industrial, double-the-size-of-your-phone cases.  I had an Otter case in the past and it was just too big to put in my pocket.  Reminds me of the cucumber scene in Spinal Tap.

In the meantime, I am trying the following…

  • Scouring the used phone sites for a replacement.
  • Using articles like this to try and save the old phone.

Agile Musing

I was at a LOMA meeting for work last week and was talking to a couple of other attendees about their Agile practices.


It reminded me of some early thinking I was doing back in the 90’s.  I was always drawn to doing things in what people now call an Agile way, but the first time I heard someone actually put words to what I was thinking was a presentation Jim McCarthy did at the 1995 Microsoft Global Summit in San Diego (I think).  I used to have the video on tape but it seems to be long lost at this point.  I found a couple of YouTube excerpts but not the whole thing.  He went on to write his book Dynamics of Software Development, which elaborated on his 21 rules (I think the book has 40 something).  I remember liking his style… oh yeah… and the content was good too. 🙂

The Pragmatic Programmer is another book that put more meat on the bones of things that I was thinking about or struggling with.  I consider it a timeless book, unlike the Peter Norton’s Programming Guide to PC book (the pink shirt book) I recently came across in my archive.

As I reminisce I am reminded about the Agile Manifesto which at first made me laugh but after that tickling feeling passed I took the simplicity and truth of it to heart.  I attached the image I keep on my desk here for posterity.


Not sure what the next Agile is going to be.  I do feel like there is something still missing that  I can’t quite put my finger on.  Hmmm.

Things that were once hard

I needed a quick utility to generate a script for setting the permissions on a massive (wide and deep) directory structure.  The analysis for this was not going well – I asked someone else to try it and they did not get the scope of the issue.  I needed to just get something running quickly, so I wrote a quick .NET app to generate what I needed.  I was pleasantly surprised when I went to actually grab the permissions for a given folder.

The last time I did something where I was scraping permissions off a folder, I was in C++ and the Win32 API.  Yikes!  High impedance for something that I was just going to throw away.

The following is a snippet of the code that I wrote in .NET 4 (I don’t think this code would be any different in .NET 2).

 class FolderPermissions
 {
   public string Name { get; set; }
   public IEnumerable<Acl> Acls { get; set; }
 }

 class Acl
 {
   public string Name { get; set; }
   public FileSystemRights Permission { get; set; }
 }

 private static FolderPermissions GetFolderPermissions(string pFolderName)
 {
   AuthorizationRuleCollection perms = SafeCallToGetAccessRules(pFolderName);
   var retAcls = new FolderPermissions { Name = pFolderName };
   var acls = new List<Acl>();
   foreach (FileSystemAccessRule perm in perms)
   {
     // only capture the Allow entries
     if (perm.AccessControlType == AccessControlType.Deny)
       continue;
     acls.Add(new Acl { Permission = perm.FileSystemRights, Name = perm.IdentityReference.ToString() });
   }
   retAcls.Acls = acls.ToArray();
   return retAcls;
 }

This snippet is where I am copying the permissions for a given folder into my own lightweight structure, so that I could do queries on the structure to help me create the script.

One thing in particular I remember about doing this in Win32 was once I got the SID for a particular identity, it was a pain to resolve that to a name.  Now it is just the IdentityReference.Value.

This is the type of value I like.  Now if I just had a scripting language to do this in, so I did not have to compile, I would be all set.  Of course there are a bunch out there – I am just not as proficient in them as I am in C#.  Hmmm.
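For what it is worth, the throwaway-tool version of this in a scripting language really is only a few lines.  Here is a rough Python sketch of the same idea – Windows ACLs would need a module like pywin32 (which I have not tried for this), so this sketch collects POSIX permission bits instead, just to show how small the tool becomes:

```python
import os
import stat
from pathlib import Path

def get_folder_permissions(root):
    """Map each directory under root to an rwx string – the same
    lightweight folder -> permissions structure as the C# version."""
    results = {}
    for dirpath, dirnames, filenames in os.walk(root):
        mode = os.stat(dirpath).st_mode
        results[dirpath] = stat.filemode(mode)  # e.g. 'drwxr-xr-x'
    return results

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as tmp:
        (Path(tmp) / "sub").mkdir()
        for folder, mode in sorted(get_folder_permissions(tmp).items()):
            print(folder, mode)
```

Once the permissions are in a plain dictionary, the querying that drives the script generation is just ordinary collection code – which was the whole point of the lightweight structure.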

IIS 7.5 and 2 Level Auth

We use a large vendor application at work.  We host all the infrastructure for the application inside the firewall, so there is absolutely no access from the Internet.

In IIS 6 we configured 2-level authentication – NTLM and Forms Auth.  The vendor requires Forms Auth for the application.  Given the importance of this application and the sensitive nature of the data, I also enabled NTLM and secured the site to only people in our division (about 450 people).  There are about 150 logins in the application, meaning that about 300 people have access to the site even though they will not be able to actually see any screens until they log in.

Through a series of discussions with different audiences, it was decided that there is still enough of a risk of those 300 people being infected with something that takes advantage of cross-site scripting or other classic vulnerabilities.  So I further locked down the site using a more restrictive group.  While I feel like we are being a little paranoid about it, I capitulated.

Enter IIS7…


Our standard for servers is Windows 2008 R2, so we are on IIS 7.5.  Doing this same 2-level authentication on IIS 7.5 did not work.  Why?  Well, because of the integrated pipeline… it simply cannot do both at the “same time”; one has to come first.  In IIS 6, NTLM always came first since it was done by IIS, and then Forms Auth since that was done by ASP.NET.

There are a couple of hacks out there that describe how to work around this.  One of them I found posted here by Mike Volodarsky (formerly of the IIS team).  Here he talks about a way to make this work by splitting up the authentication and forcing one to happen before the other.  I was up until well after midnight last night trying to figure out how I would make this work given that the application is a vendor application and I don’t have the source code.  Not to mention that everything is precompiled, signed and obfuscated.  All of which adds up to… this would be really hard to hack.

Finally, after a bit of chin rubbing…I came to the conclusion that the integrated pipeline may not be the problem at all.  Why do I even still need NTLM?  I mean if the only way for someone to access a web page on the site is to have a valid Forms Auth token then do I really need to force them to also have an NTLM token?  I went to bed content that I just need to leave NTLM behind in this case.
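Leaving NTLM behind also makes for a much simpler configuration.  As a rough sketch (the element names are standard ASP.NET/IIS 7 configuration, but the login URL is illustrative, the vendor app’s actual settings will differ, and the IIS authentication section usually has to be unlocked at the server level first), a Forms-only web.config looks something like this:

```xml
<configuration>
  <system.web>
    <!-- Forms Auth stays with ASP.NET, as the vendor requires -->
    <authentication mode="Forms">
      <forms loginUrl="~/Login.aspx" />
    </authentication>
  </system.web>
  <system.webServer>
    <security>
      <authentication>
        <!-- No more NTLM; anonymous lets users reach the Forms login page -->
        <anonymousAuthentication enabled="true" />
        <windowsAuthentication enabled="false" />
      </authentication>
    </security>
  </system.webServer>
</configuration>
```

With only one authentication scheme in play, the integrated pipeline ordering problem simply never comes up.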

Now I just need to convince everyone that was pushing the original requirement for 2 level authentication that I don’t need it anymore.  Being that they don’t really understand the technology very well – that could be a challenge.  Since the way we got here was through a vulnerability scan of the web site in the first place – perhaps requesting another one will demonstrate my point and I won’t have to make them understand the why.

I will post an update on the outcome.

TFS Recovering Shelveset for Invalid User

One of the developers on the team was getting a TFS error (below) yesterday while trying to access a shelveset from a developer who left the account a couple of months ago.  It turns out he needed some of the code on that shelveset.

 TF50605: There was an error looking up the SID for TC30014  

Note to self… shelvesets are probably not the way to do this; thinking that a branch would be a better construct.  I want to encourage them to be doing more of these anyway.

The problem is that TFS is going to look up this user in Active Directory, and the user does not exist anymore.  I can see the shelvesets in TFS using either TFS itself or TFS Sidekicks.  I included a screen print of all the active shelvesets for this user in Sidekicks.


So TFS must be doing some lookup on the user, and when it does not find it – errors out.  Not knowing how to solve this, a couple of searches later (“tf50605 -vssconverter”) I found this article.  While not directly what I needed, it was enough information to crank up SQL Server Management Studio and start poking around a bit.  So I started with the OwnerId for the user that was removed and for the user who was trying to get the code.

 SELECT IdentityId FROM tbl_Identity WHERE (DisplayName LIKE 'ad2\TC30014')  
 SELECT IdentityId FROM tbl_Identity WHERE (DisplayName LIKE 'ad2\RMxxxxx')  

Once I had this I plugged the deleted user’s id into the following query to get all the workspaces.

 SELECT TOP 1000 [WorkspaceId]  
  FROM [TfsVersionControl].[dbo].[tbl_Workspace]  
  WHERE OwnerId = 276  

This showed me a bunch of workspaces.  What I noticed is that evidently shelvesets and workspaces are stored in the same table, distinguished by type.  So a little bit of inferring and playing in TFS, and it looks like if I hack this table, I can reassign all the shelvesets to a valid user (which is sort of the spirit of the article above).

Leaving out some of the details, I ended up with the following query that reassigns the orphaned shelvesets (type=1) from one owner to the other.  Since the WorkspaceName is part of the primary key (and relatively short), I changed the name so that the new owner could distinguish between his shelvesets and those that were reassigned.

  UPDATE [TfsVersionControl].[dbo].[tbl_Workspace]  
  SET OwnerId = 123,  
    WorkspaceName = RIGHT(WorkspaceName + '-Reassigned',64)  
  WHERE OwnerID = 276   
    AND Type = 1  

Looking back at TFS Sidekicks (I verified it first in SSMS – wink), I could see that the more recent shelvesets had indeed been reassigned.  Success!!


Now granted, we are on a relatively old version of TFS; so this hack may already be obsolete. But I wanted to put it out here just in case.