# Does RX Switch Dispose Old Subscriptions?

Saturday, 27 October 2012 by haemoglobin

I am currently lucky enough to be making use of RX at work, and I must say – as with most people who have the chance to use it, I love it.

It is amazing to be able to pass an observable around your application, combine it with other observables, subscribe to the result and have events pumped straight to you when and how you want. You hear the term “composable” bandied about, but without seeing it in action it’s hard to appreciate.

A recent requirement was for a screen to subscribe to push updates from the server every time a set of data is retrieved. Whenever a new set of data arrives, a new subscription for push updates is set up; at the same time we no longer need updates for the previous set, so we need to ensure that subscription is torn down.

The RX Switch operator looked perfect for this – however I wanted to make sure that the old subscription is disposed of, so we are not keeping needless connections open.

The behaviour of Switch() is easy to test:
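A minimal sketch of such a test, assuming the System.Reactive package (the subjects, observables and names here are illustrative, not the original listing; Finally() records when each inner subscription is torn down):

```csharp
using System;
using System.Collections.Generic;
using System.Reactive.Linq;
using System.Reactive.Subjects;

public class SwitchDisposalTest
{
    public static readonly List<int> Disposed = new List<int>();

    // Each inner observable never completes, so Finally() can only
    // fire when its subscription is explicitly disposed.
    static IObservable<int> MakeInner(int i)
    {
        return Observable.Never<int>().Finally(() => Disposed.Add(i));
    }

    public static void Run()
    {
        var outer = new Subject<IObservable<int>>();

        using (outer.Switch().Subscribe(_ => { }))
        {
            outer.OnNext(MakeInner(1));
            outer.OnNext(MakeInner(2)); // Switch should now dispose inner 1

            Console.WriteLine("Disposed so far: " + string.Join(",", Disposed));
        }
        // Disposing the outer subscription tears down inner 2 as well.
    }
}
```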

The above code creates two observables, with i being 1 then 2 respectively. We are looking to see that the first observable (the one where i is 1) is disposed.

This passes and shows that with the Switch operator, the subscription to the first observable is indeed disposed of when the second observable is created and used as the new subscription. If Switch() is replaced with Merge() for example, then understandably both are kept alive and the test fails.

# RE: IQueryable vs. IEnumerable in LINQ to SQL queries

Tuesday, 27 March 2012 by haemoglobin

I came across an interesting blog post today (admittedly an old one!) from Jon Kruger, who experimented with some interesting behaviour differences between IQueryable<T> and IEnumerable<T>. The blog post can be found here; a summary of the findings is below:

NorthwindDataContext dc = new NorthwindDataContext();
IEnumerable<Product> list = dc.Products
    .Where(p => p.ProductName.StartsWith("A"));
list = list.Take<Product>(10);
Debug.WriteLine(list.Count<Product>());  //Does not generate TOP 10 !!

and

NorthwindDataContext dc = new NorthwindDataContext();
IEnumerable<Product> list2 = dc.Products
    .Where(p => p.ProductName.StartsWith("A"))
    .Take<Product>(10);
Debug.WriteLine(list2.Count<Product>()); //Works correctly


and

NorthwindDataContext dc = new NorthwindDataContext();
IQueryable<Product> list3 = dc.Products
    .Where(p => p.ProductName.StartsWith("A"));
list3 = list3.Take<Product>(10);
Debug.WriteLine(list3.Count<Product>()); //Works correctly


The first example omits the very important TOP 10 clause from the generated SQL query, returning all rows into memory and then taking the first 10 from there (obviously not ideal). The next two correctly include the TOP 10 clause, returning only those rows from the database.

The reason the first example fails is that the call to Take actually invokes the IEnumerable<T> extension method from Enumerable. In ILSpy, this has the following implementation (TakeIterator being the private iterator method that Take returns):

private static IEnumerable<TSource> TakeIterator<TSource>(IEnumerable<TSource> source, int count)
{
    if (count > 0)
    {
        foreach (TSource current in source)
        {
            yield return current;
            if (--count == 0)
            {
                break;
            }
        }
    }
    yield break;
}


In the second and third examples however, Take is called on an IQueryable<T>, which executes the extension method defined in Queryable. This has a totally different implementation, as ILSpy shows below:

public static IQueryable<TSource> Take<TSource>(this IQueryable<TSource> source, int count)
{
    if (source == null)
    {
        throw Error.ArgumentNull("source");
    }
    return source.Provider.CreateQuery<TSource>(Expression.Call(null, ((MethodInfo)MethodBase.GetCurrentMethod()).MakeGenericMethod(new Type[]
    {
        typeof(TSource)
    }), new Expression[]
    {
        source.Expression,
        Expression.Constant(count)
    }));
}


As per the MSDN documentation, calls made on IQueryable operate by building up the internal expression tree instead.
"These methods that extend IQueryable(Of T) do not perform any querying directly. Instead, their functionality is to build an Expression object, which is an expression tree that represents the cumulative query. "

When Count is called in the last two examples, Take has already been built into the expression tree, causing a TOP 10 to appear in the SQL statement. In the first example however, Take on IEnumerable<T> starts iterating the IQueryable returned from Where, which does not have Take in its expression tree – hence the behaviour. If you are wondering why an IQueryable can be assigned to an IEnumerable as in the first two examples: IQueryable extends IEnumerable, so it is pretty much the same interface, but with a few extra properties to house the LINQ provider and the internal expression tree. The extension methods defined over each of these interfaces, however, are quite different.

Another thing that helps when thinking about LINQ queries is that they in effect execute from the last call inward, not the other way around like most traditional method call chains. For example, calling Count starts enumerating over the IEnumerable returned from Take, which itself enumerates over the IEnumerable returned from Where, and so on, depending on how many LINQ operators you chain together.
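This pull-based chaining is easy to see with plain LINQ to Objects (a sketch; the source below stands in for any lazily-enumerated sequence):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class DeferredExecutionDemo
{
    public static int YieldCount; // how many items the source actually produced

    public static IEnumerable<int> Source()
    {
        foreach (var i in new[] { 1, 2, 3, 4, 5 })
        {
            YieldCount++;
            Console.WriteLine("Source yielded " + i);
            yield return i;
        }
    }

    public static void Main()
    {
        // Count() pulls from Take(2), which pulls from Where(...),
        // which pulls from Source() - and stops after two matches.
        int count = Source().Where(i => i > 1).Take(2).Count();
        Console.WriteLine("Count = " + count);             // 2
        Console.WriteLine("Items pulled = " + YieldCount); // 3 - it never reaches 4 or 5
    }
}
```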

# Enabling TeamCity Push to GitHub

Monday, 26 March 2012 by haemoglobin

The following documentation describes what is necessary to run commands such as git push to a GitHub repository from a build within TeamCity.
Without the correct SSH configuration, any call to a remote git repository in the build script will cause the agent to hang the build while it attempts to ask the user to add the remote host to the ~/.ssh/known_hosts file.
The message looks like the following (requiring user input, which is not possible from a build):

The authenticity of host 'github.com (207.97.227.239)' can't be established.
RSA key fingerprint is d2:80:ef:7a:71:4b:92:89:c7:3d:fb:e6:f5:26:44:e1.
Are you sure you want to continue connecting (yes/no)?

This text will also not be written to any build output or logs (due to the blocking call) making it difficult to diagnose.

When we build our NuGet packages, we build them off a branch. This is because they are released libraries and we need a reference back to the code that is in use in UAT/PROD etc. The source symbol paths that we write into the built PDBs (as per my last post on GitHub Source Symbol Indexing) are indexed with HTTP references back to GitHub on that branch, so that source stepping into our libraries works for any developer using them (without needing Git installed).

The source files downloaded into the developer’s Visual Studio debugging session (when stepping into our code) have an auto-generated header describing when the file was built and which version of the library it came from.
The header looks similar to the below, varying depending on the file:

// YourLibrary SDK
// YourCompany.YourLibrary\Config\ConfigureWindsorBuilder.cs
// YourCompany.YourLibrary, Version 3.20.0.114, Published 07/03/2012 17:06
// ----

If you are interested, the powershell script we are using to do this can be downloaded from here.

Since we do this as part of the build we need the build to push these changes back to the repository itself so the source stepping will download the file from GitHub with the added headers.
Build agents are able to interact with the repository if the TeamCity checkout mode is set to Automatically on agent.

The rest of this documentation will describe what steps are necessary to ensure the build agent is setup to support communicating with the remote repository and how TeamCity itself needs
to be configured.

The official TeamCity 7 documentation regarding git support can be found here: http://confluence.jetbrains.net/display/TCD7/Git+%28JetBrains%29
Mike Nichols has also written about TeamCity/GitHub interaction in his blog post here.

## Configuring TeamCity for Agent Side Checkout

### Steps

• Attach a Git VCS root, configure the VCS root name, Fetch URL, Ref name (branch), User Name Style as usual.
• Authentication Method needs to be set to "Default Private Key".
• This is the only supported method for agent-side checkout with SSH.
• Ensure "Ignore Known Hosts Database" is ticked.
• This saves any potential hassle with the Java/Windows home drive mismatch below.
• Everything else can be left as default, including "Path to git" which should be set to %env.TEAMCITY_GIT_PATH%.
• Back in the main VCS Settings, ensure "Automatically on agent" is selected for the VCS checkout mode.
• Currently, "Clean all files before build" needs to be checked to avoid a hanging build due to a TeamCity issue.
• This should be solved in TeamCity 7.1 and there is already a patch available.

## Configuring TeamCity Build Agent

Any agents that do not have Git installed will appear *incompatible* with this configuration; the incompatible-agent requirement message env.TEAMCITY_GIT_PATH exists will be displayed until Git is installed on the agent and the TeamCity agent service is restarted.

### Steps if Git is not already installed and configured on the Agent

• Follow http://help.github.com/win-set-up-git/ to install msysgit on the build agent.
• Choose the "Run Git from the Windows Command Prompt" option which enables us to call git commands easily from the build.
• Using Git bash, generate ssh keys to the default "home directory" location (~/.ssh), with no passphrase (TeamCity does not support passphrases for Default Private Key authentication).
• Add the public key to your GitHub account as per the instructions.
• Run the following commands to ensure the agent appears correctly in the Git history when it makes commits (this is stored in the file ~/.gitconfig):
• git config --global user.name "Build Agent"
• git config --global user.email "buildagent@db.com"

NOTE: ~ above refers to the home directory, on Windows this is the concatenation of the HOMEDRIVE and HOMEPATH environment variables.

### Bypass Known Hosts Check

It is the request to add the remote host to the list of known hosts that hangs the build during git calls involving remote repositories. If the remote host has not already been added by manual means on the agent (in which case an entry will be in ~/.ssh/known_hosts), the following needs to be done so it is added automatically, without user interaction, when the build makes its first request.

#### Steps

• Create a file named "config" (no extension) under ~/.ssh and add the following lines:
• Host github.dbatlas.gto.intranet.db.com
• StrictHostKeyChecking no

### Check Java/Windows Home Directory Mismatch

Since TeamCity uses a Java implementation of SSH for the initial checkout, Java considers the home directory (and hence the directory it looks in for SSH keys) to be one level up from the value of the registry key HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders\\Desktop.

In a corporate environment this can sometimes be different from the HOMEDRIVE and HOMEPATH environment variables that Git used above when writing the SSH keys.

This is described here. If there is a mismatch, you will likely see an Authentication failed error during the agent side git checkout since TeamCity will be looking for the SSH keys in the incorrect home directory location.

#### Steps

• Within TeamCity, find the agent and view the "Agent Parameters". Under System Properties, check the variable *user.home* (this is the home directory that java considers it to be).
• If this is different to where the ssh keys were created in the previous steps you have two options:
• Copy the .ssh folder to this location as well, or
• Create a blank .ssh folder here and add a file "config" (no extension) including the following line which maps to the private key created previously:
• IdentityFile {path_to}\id_rsa

NOTE: In the {path_to} replacement above, don't use any mapped networked drives, use the full network path instead. Also relative directory locations do not seem to work here.

### Check Home Directory Variables Available to Agent

Since the git installation actually adds c:\Program Files (x86)\Git\cmd to the path, when you call git from a build script you are in
effect really calling c:\Program Files (x86)\Git\cmd\git.cmd which is another batch file.

Git.cmd combines the %HOMEDRIVE% and %HOMEPATH% environment variables into a %HOME% variable which git.exe uses for the location of the
.ssh keys, .gitconfig and config file as configured above.

If the TeamCity agent is running as a service (as opposed to started from the command prompt through c:\BuildAgent\bin\agent start), these variables may not be available and our calls to git in the build script will fail.

#### Steps

• In c:\BuildAgent\conf\buildAgent.properties add the line:
• env.HOME={path to user home directory containing the .ssh folder and .gitconfig}

IMPORTANT NOTE: Remember to escape backslashes in this file for TeamCity to process these correctly, so c:\Users\MyHomeDirectory becomes c:\\Users\\MyHomeDirectory.
Also, do not use any network mapped drive names as these do not seem to resolve, i.e. instead of X:\, use \\\\SERVERNAME\\NETWORKSHARE\\USERSHOMEDIR

## Considerations when calling Git from a batch file in the build

Always ensure you begin any calls to git in a batch file with call.

In a Windows batch file, any call to another batch file will not return unless you use the call keyword beforehand. So prefer to use call git push over git push for example.

## Debugging

The log files in c:\BuildAgent\logs can all have useful information when trying to resolve the issues discussed above, specifically the files teamcity-vcs.log, teamcity-build.log and teamcity-agent.log.

## Conclusion

As you might imagine, this all took quite a bit of fiddling around to get going - and I must say the available documentation does seem vague in a lot of areas. I think it is perfectly reasonable to argue that the extra build complication may outweigh the “nice to have” use case of pushing into the repository from the build, at least until this becomes less difficult.

But I hope that this is helpful for anyone doing the same thing!

# GitHub Source Symbol Indexer

Monday, 9 January 2012 by haemoglobin

Do you host your .NET library on GitHub?

Do you want to give developers who use your library the ability to step into your code while debugging, without them even needing to have git installed or your repository cloned on their machines?

Then the GitHub Source Indexer could be for you.

The GitHub Source Indexer is a powershell script that will recursively iterate over PDB files in the directory you specify, indexing the source symbol server links back to raw GitHub HTTP urls, which Visual Studio will load while debugging.

This powershell script was adapted from SourcePack (by Sebastian Solnica) which can be used if you have the full source on your machine and packed into a zip file. In his article, he also describes a lot of the science behind the source indexing of PDB files which is useful reading. Much of the source of the script is reused from SourcePack (also hosted on codeplex here) so credits and thanks to Sebastian.

There is another tool, SourceServer-GitExtensions, but it requires Git and a local clone of the repository (we also had trouble getting this tool working).

The first requirement before running the script is to have the Debugging Tools for Windows installed which is part of the Windows SDK which can be downloaded from here: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=8279

As part of the installation, ensure the Debugging Tools for Windows option is selected as part of the install (install or deselect other options as you desire):

Once complete, add C:\Program Files\Debugging Tools for Windows (x64)\srcsrv (or wherever it installed itself on your machine) into your PATH environment variable.

The essential steps of using the script are as follows – all from the command prompt:

1. Find out what paths are currently indexed in the pdb file:
srctool -r YourLibrary.pdb
The output should have an original file path for every source file used in the compilation; let’s assume the following for the example:
C:\git\TestRepository\YourLibrary\LibraryClass.cs
2. We want this to link to a raw GitHub URL instead. A real example URL can be seen here: https://github.com/Haemoglobin/TestRepository/raw/release1.0/ExampleLibrary/LibraryClass.cs
Confirm the URL matches the same version of the source that was used when the PDB file was created.
This is what the Visual Studio debugger will download into the IDE and step into.
3. Take note of the beginning of the path in the first step, which tells the script what to strip out and replace with the GitHub URL, i.e. C:\git\TestRepository
4. Run the powershell script as follows:
powershell .\github-sourceindexer.ps1 -symbolsFolder "C:\git\TestRepository\YourLibrary" -userId "Username" -repository "RepositoryName" -branch "branchtagname" -sourcesRoot "C:\git\TestRepository" -verbose
Where:
• -symbolsFolder: Directory to recurse to source index all PDB files.
• -repository: Your project’s GitHub repository name.
• -branch: The name of the branch or tag that matches the correct versions of the source files as when the PDB file was created.
• -sourcesRoot: The path that we will be stripping out from the beginning of the original file paths found in step 3 above to replace with the GitHub URL.
• -verbose: An optional powershell flag to have the script print more information to the console
• Note: Watch out for escaping backslashes! For example use "C:\git\TestRepository" not "C:\git\TestRepository\" as the latter will escape the ending quote of the powershell parameter.
5. Run the following against the PDB file to confirm the GitHub source indexing information was written:
pdbstr -r -p:ExampleLibrary.pdb -s:srcsrv
Which will look something like the following:
SRCSRV: ini ------------------------------------------------
VERSION=1
INDEXVERSION=2
VERCTL=Archive
DATETIME=01/09/2012 11:58:22
SRCSRV: variables ------------------------------------------
SRCSRVVERCTRL=http
HTTP_ALIAS=http://github.com
HTTP_EXTRACT_TARGET=%HTTP_ALIAS%/%var2%/%var3%/raw/%var4%/%var5%
SRCSRVTRG=%http_extract_target%
SRCSRVCMD=
SRCSRV: source files ---------------------------------------
C:\git\TestRepository\YourLibrary\LibraryClass.cs*Haemoglobin*TestRepository*master*YourLibrary/LibraryClass.cs
SRCSRV: end ------------------------------------------------

You can now distribute this PDB file along with your library’s DLL, and the developer will be able to step into your source library code to assist their debugging session. Alternatively to distributing the PDB file with your library, you could also make it available through a symbol server (which I may save for another blog post).

There are a couple of things that the developer will need to setup in Visual Studio however before they can step into the GitHub source your PDB files reference:

1. Make sure the symbol cache directory is set to a writable location (if not running Visual Studio as administrator ensure that the non-elevated user also has permissions to write here):

2. Ensure source server support is enabled and Enable Just My Code is unticked:

That’s it!

To test this locally, copy and reference your library’s dll/pdb pair from another project, but make sure you rename the original library folder you compiled from. This is because the original file path is not actually removed in the source indexing process, so if it is still there, Visual Studio will load the original path instead of downloading from GitHub.

Reference the library, write some code against it and add a breakpoint before the library call. Use F11 to step into the library code, and if the file correctly downloads from GitHub it should load from your symbol server cache directory, similar to below:

Good luck!

[Update: If you use Resharper, you will also be able to step into the library code on GitHub at development time as you are navigating around the code, so you don’t even need to be in a debugger session for this to happen. Nice!]

# Patterns for Accessing App.config Settings

Friday, 6 January 2012 by haemoglobin

Here are some patterns I have used for accessing App.config settings which I’ve found to be quite clean. If you introduce an interface on top of the class, you will also be able to stub it out for unit testing.

Consider the following application settings:

<appSettings>
  <add key="DelaySeconds" value="15"/>
  <add key="EndDate" value="01/02/2013 18:30"/>
</appSettings>

In code, timers usually require millisecond values rather than seconds, so we do the conversion here using a Lazy object, showing how any calculation can be done once, and only when required. We can then access these settings with the class below:

public class AppSettings
{
    private readonly Lazy<int> _delayMilliseconds;

    public AppSettings()
    {
        _delayMilliseconds = new Lazy<int>(() => GetMilliseconds("DelaySeconds"), true);
    }

    public int DelayMilliseconds
    {
        get { return _delayMilliseconds.Value; }
    }

    private int GetMilliseconds(string configSettingName)
    {
        var seconds = GetSetting<double>(configSettingName);
        return Convert.ToInt32(seconds * 1000);
    }

    public T GetSetting<T>(string key)
    {
        string value = ConfigurationManager.AppSettings[key];
        if (value != null)
        {
            try
            {
                var converter = TypeDescriptor.GetConverter(typeof(T));
                return (T)converter.ConvertFrom(value);
            }
            catch (Exception ex)
            {
                var logError = string.Format("Unable to convert configuration value for {0} to type {1}", key, typeof(T).Name);
                Console.WriteLine(logError);
                Console.WriteLine(ex.ToString());
            }
        }

        return default(T);
    }
}

You can either wrap each setting in its own property or access it directly based on the string key as below:

var appSettings = new AppSettings();
Console.WriteLine(appSettings.DelayMilliseconds);
Console.WriteLine(appSettings.GetSetting<DateTime>("EndDate"));

The output being as below:

15000
01/02/2013 18:30:00
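On the earlier point about unit testing, introducing an interface over the class lets tests stub the settings without touching App.config – a sketch (the IAppSettings and StubAppSettings names are assumptions, not from the original post):

```csharp
public interface IAppSettings
{
    int DelayMilliseconds { get; }
    T GetSetting<T>(string key);
}

// AppSettings above would simply declare ": IAppSettings".
// A test stub then supplies fixed values with no config file involved:
public class StubAppSettings : IAppSettings
{
    public int DelayMilliseconds { get { return 100; } }

    public T GetSetting<T>(string key) { return default(T); }
}
```

Code under test depends only on IAppSettings, so swapping in StubAppSettings makes timer-related behaviour deterministic in tests.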

# Access to Modified Closure

Monday, 6 September 2010 by haemoglobin

Let’s play! Lately, I have been doing a fair bit of Silverlight development, which forces an asynchronous style of development. After installing Resharper I would often see the message “Access to modified closure”. This happens when you have code that looks like the following (note this is a contrived example, but similar to the type of code you might end up writing against asynchronous WCF services):


class Program
{
    class Person
    {
        public string Name;
        public string Greeting
        {
            set
            {
                Console.WriteLine(String.Format("Setting {0} to {1}", Name, value));
            }
        }
    }

    static void Main(string[] args)
    {
        var peopleList = new List<Person>();
        peopleList.Add(new Person() { Name = "John Smith" });
        peopleList.Add(new Person() { Name = "Bobby Jones" });

        foreach (var person in peopleList)
        {
            GetGreeting(person, s => person.Greeting = s);
        }
    }

    static void GetGreeting(Person person, Action<string> messageResponse)
    {
        messageResponse(String.Format("Hi, {0}!", person.Name.Split()[0]));
    }
}

Resharper will actually underline the lambda expression above (s => person.Greeting = s) with the aforementioned message – sometimes this can be safe to ignore, depending on what you are doing. In the above example, what do you think the output should be?

No surprises actually:

Setting John Smith to Hi, John!
Setting Bobby Jones to Hi, Bobby!

But, some problems occur if the call to GetGreeting is asynchronous, as Silverlight callbacks to the server always are.

We simulate this by making GetGreeting asynchronous instead:

static void GetGreeting(Person person, Action<string> messageResponse)
{
    var worker = new BackgroundWorker();

    worker.DoWork += (s, e) =>
    {
        var parameters = (Tuple<Person, Action<string>>)e.Argument;
        Person p = parameters.Item1;
        Action<string> callback = parameters.Item2;

        callback(String.Format("Hi, {0}!", p.Name.Split()[0]));
    };

    worker.RunWorkerAsync(new Tuple<Person, Action<string>>(person, messageResponse));
}


Guess what we have now? Interesting to know that it will always be the following:

Setting Bobby Jones to Hi, John!
Setting Bobby Jones to Hi, Bobby!
Notice that it is setting the two greetings on the same Bobby Jones object. Now that’s what Resharper is warning you about when it is saying “Access to modified closure” – to understand what is going on here I recommend reading the following post:
The implementation of anonymous methods in C# and its consequences

Essentially, once you know how things work under the hood it makes perfect sense – the person variable becomes an instance field on a compiler-generated class that contains the anonymous method, and its reference is rapidly updated by the foreach loop to the last Person object in the list. By the time the asynchronous methods have a chance to act on it, they both end up acting on that last object.
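For reference, the classic minimal fix for the closure bug itself is to copy the loop variable into a fresh local before capturing it (a sketch in the same shape as the code above; note that from C# 5 onwards the foreach variable is scoped per iteration, so the copy is only needed on older compilers):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Threading;

class ClosureFixDemo
{
    class Person
    {
        public string Name;
        public string Greeting;
    }

    static void GetGreeting(Person person, Action<string> messageResponse)
    {
        // person here is a method parameter, so it is captured safely.
        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) => messageResponse("Hi, " + person.Name.Split()[0] + "!");
        worker.RunWorkerAsync();
    }

    public static string Summary;

    public static void Main()
    {
        var peopleList = new List<Person>
        {
            new Person { Name = "John Smith" },
            new Person { Name = "Bobby Jones" }
        };

        foreach (var person in peopleList)
        {
            var localPerson = person; // fresh variable per iteration:
                                      // each lambda captures its own copy
            GetGreeting(localPerson, s => localPerson.Greeting = s);
        }

        Thread.Sleep(500); // crude wait for the background workers (demo only)

        Summary = string.Join(" | ",
            peopleList.ConvertAll(p => p.Name + " -> " + p.Greeting));
        Console.WriteLine(Summary);
    }
}
```

With the copy in place, each person receives their own greeting instead of both landing on the last list entry.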

In the application I am working on, I need to pre-load a lot of data up front and have the UI display an in-progress loading panel. To solve the problem above, and to avoid flooding the server with a tonne of requests all at once, I decided to load each object one at a time and display feedback. I could have used a Queue or an Enumerator for this, however I needed to display “Loading 3 of 34” type information, and keeping the current index outside the Queue or Enumerator was messy and repetitive. So I ended up implementing an Iterator class I can use for this (a quick search didn’t reveal anything like it, but it was fast enough to make – not thread safe, but suitable for my use):

public class Iterator<T> where T : class
{
    private readonly List<T> _items;
    private int _current = -1;

    public int CurrentIndex
    {
        get { return _current; }
    }

    public int Count
    {
        get { return _items.Count; }
    }

    public Iterator(List<T> items)
    {
        _items = items;
    }

    public T Next()
    {
        _current++;
        if (_current >= _items.Count)
            return null;
        else
        {
            return _items[_current];
        }
    }

    public T Current()
    {
        return _items[_current];
    }
}

class Program
{
    class Person
    {
        public string Name;
        public string Greeting
        {
            set
            {
                Console.WriteLine(String.Format("Setting {0} to {1}", Name, value));
            }
        }
    }

    static Iterator<Person> peopleIterator;

    static void Main(string[] args)
    {
        var peopleList = new List<Person>();
        peopleList.Add(new Person() { Name = "John Smith" });
        peopleList.Add(new Person() { Name = "Bobby Jones" });

        peopleIterator = new Iterator<Person>(peopleList);
        LoadNextPerson();
    }

    // Loads one person at a time; nextPerson is a fresh local on each call,
    // so the closure below is safe, and the callback kicks off the next load.
    static void LoadNextPerson()
    {
        Person nextPerson = peopleIterator.Next();
        if (nextPerson != null)
        {
            GetGreeting(nextPerson, s =>
            {
                nextPerson.Greeting = s;
                LoadNextPerson();
            });
        }
    }

    static void GetGreeting(Person person, Action<string> messageResponse)
    {
        //Same as before.
    }
}

I thought this was quite interesting and worthy of a blog post,

Thanks,
Hamish

# ASP.NET Reverse Proxy Considerations

Sunday, 4 July 2010 by haemoglobin

## What is a Reverse Proxy

In lead developer roles on previous ASP.NET projects, I’ve needed to deploy into some fairly heavy enterprise infrastructure - once needing to deal with some unexpected behaviour caused by reverse proxies. I’ll describe in this article some things to keep in mind when doing this, to ensure everything runs smoothly and correctly all the way from the development environment to production.

A reverse proxy has a few functions, one function is acting as the network load balancer as I talked about in my previous post Web Farm Considerations. It can also be the point that processes SSL encryption.

Many people might already be familiar with a standard “proxy” that they have needed to configure their browser to use to make connections to the internet (typically from inside a company network). More specifically this is known as a “forward proxy” and deals with outgoing traffic: instead of the browser making a connection directly to a website, it sends the request first via a proxy server somewhere in the company’s network. The proxy server may be the only machine on the company network that can access the internet, and it gives the company further control for blocking certain sites, monitoring outgoing requests, virus scanning, caching and so on.

A reverse proxy on the other hand sits on the server side – and is essentially the opposite of a forward proxy.

## Application Path Issues

The scenario described here is when your ASP.NET application (ApplicationX) is to be deployed to a subdirectory of an already existent company wide domain (www.company.com), i.e to http://www.company.com/ApplicationX.

The user’s browser will connect to the reverse proxy which has a public address such as https://www.company.com/ApplicationX (so essentially connecting to it without realising it as opposed to the forward proxy that needs to be configured) – which will then forward the request to a server internal to that network such as http://internal05/ApplicationX to do the real processing of the request and dishing up the web content to the user. Acting as a NLB (Network Load Balancer) the request could also be redirected to http://internal0[1-4]/ApplicationX unbeknown to the user. These internal0[1-5] servers are only accessible within the company network, behind the firewall and not accessible directly from the cloud/internet.

Now – here is the issue to keep in mind when deploying in this type of scenario – it’s quite possible for servers internal01-05.company.com to be dedicated to ApplicationX, in which case it would seem natural to simply set the default website on these to serve ApplicationX directly, i.e. at http://internal05.company.com.

The problem with this is that ASP.NET will send virtual addresses such as WebResource.axd (some more info about WebResource.axd in my post Securing Static (Non-ASP.Net) Files) back to the client browser as the absolute address “/WebResource.axd”. This will cause the client browser to make requests to https://www.company.com/WebResource.axd – not good! Instead it should be https://www.company.com/ApplicationX/WebResource.axd.

To achieve this, the option is to create a virtual directory on the internal server called ApplicationX so that it is served from http://internal05.company.com/ApplicationX (matching the reverse proxy’s URL structure). ASP.NET will now return the absolute address “/ApplicationX/WebResource.axd” back to the client browser which will resolve correctly behind the reverse proxy.

In terms of linking to resources yourself, ensure that you don’t use something like Request.Url. This will work in your development environment, but when deployed behind the reverse proxy – Request.Url will actually return http://internal05.company.com/ApplicationX (as far as the ASP.NET application is concerned, it is serving content from this location as the reverse proxy has made a request to the internal server with this URL). Obviously from the client browser however any references to this address will not work since this server is behind the company firewall.

Instead, make use of the special character “~” wherever possible (in conjunction with ResolveUrl where necessary), which will resolve to “/ApplicationX”. Here are some examples (note that for server side controls, “~” can be specified and interpreted directly within the properties themselves, without the use of ResolveUrl):

<asp:Image runat="server" ID="image" ImageUrl="~/Penguins.jpg" />
<img alt="Penguins" src='<%=ResolveUrl("~/Penguins.jpg") %>' />

This will mean the client browser will request items back to http://www.company.com/ApplicationX as well as ensure everything still works in the development environment.

Using Request.ApplicationPath instead of “~” will probably cause issues if you are running your Visual Studio web server development environment from the root directory. Linking to resources from the root directory, Request.ApplicationPath returns “/” and you end up with a request to //Penguins.jpg, which will fail, such as below:

<img alt="Penguins" src='<%=Request.ApplicationPath + "/Penguins.jpg" %>' />

The solution is either to use “~”, or, if you do prefer Request.ApplicationPath, to run your development environment from a virtual path instead of the root directory.
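If you do stick with Request.ApplicationPath, a small defensive join avoids the double-slash problem – CombineWithAppPath below is a hypothetical helper of my own, not a framework method:

```csharp
using System;

class AppPathHelper
{
    // Hypothetical helper: joins the application path ("/" at the root,
    // "/ApplicationX" under a virtual directory) with a relative resource
    // path without ever producing a leading double slash.
    public static string CombineWithAppPath(string applicationPath, string relative)
    {
        return applicationPath.TrimEnd('/') + "/" + relative.TrimStart('/');
    }

    static void Main()
    {
        Console.WriteLine(CombineWithAppPath("/", "Penguins.jpg"));             // /Penguins.jpg
        Console.WriteLine(CombineWithAppPath("/ApplicationX", "Penguins.jpg")); // /ApplicationX/Penguins.jpg
    }
}
```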

In summary, before deploying to a more complicated infrastructure than a simple single-server scenario, it pays to think through any potentially surprising behaviour you might see. To save delays further down the track, it can be wise to put time aside to set up a test environment first that includes the reverse proxy, load-balancing servers etc. to ensure everything still works as expected.

Good luck! Hamish

# ClickOnce Updates and ISP Web CachesJun29

Tuesday, 29 June 2010 by haemoglobin

I have recently had an interesting experience debugging some ClickOnce issues with my WPF Inspirational Quote Management System. Essentially, I would deploy a new version of the application, but some users, after choosing File -> Check For Updates, would receive a message saying that the latest version was already installed – which I knew was not the case.

I could not figure out why it would not pick up the new version of the application. One user was a friend, so I was able to work with them using TeamViewer to try a few things on their computer, including inspecting the contents of the ClickOnce installer file (http://www.hamishgraham.net/files/WPFQuoteManagement/WPFQuoteManagement.application) that they receive after browsing to the URL. I noticed that the contents of the downloaded file were quite old compared to what I would get downloading from the same URL on my computer. Straight away this was smelling of caching problems.

No amount of Ctrl-F5 would fix this – only appending a question mark to the end of the URL would retrieve the latest version (a common trick to bypass web caches). That still does not help the WPF application though, which (via ClickOnce under the hood) requests the aforementioned URL for updates without any question mark.

I needed to confirm that this was indeed a web cache issue, so I fired up Fiddler and ran the WPF application’s check-for-updates function again. Fiddler intercepts all HTTP traffic to and from the computer, from all applications and not just a web browser – very handy in this case.

On my friend’s computer, the WPF application would receive the following HTTP headers for the ClickOnce .application file when checking for updates:

HTTP/1.1 200 OK
Content-Type: application/x-ms-application
Last-Modified: Tue, 01 Jun 2010 05:21:36 GMT
Accept-Ranges: bytes
ETag: "aeb494e4a1cb1:0"
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Fri, 25 Jun 2010 04:20:46 GMT
Via: 1.1 bc5
Content-Length: 5769
Connection: Keep-Alive
Age: 343128

On my computer, on the other hand, it looked like this:

HTTP/1.1 200 OK
Content-Type: application/x-ms-application
Last-Modified: Mon, 28 Jun 2010 05:41:10 GMT
Accept-Ranges: bytes
ETag: "eec57e838416cb1:0"
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Tue, 29 Jun 2010 04:08:22 GMT
Content-Length: 5765
Connection: Keep-Alive
Age: 0

The problem areas in my friend’s HTTP response are the Via and Age headers. The file was being served from “1.1 bc5”, which is the name of TelstraClear’s web cache (my friend being a TelstraClear customer), and by the looks of it the cache holds a version I last modified on the 1st of June. The Date field (25th June) indicates when the web cache last checked that the file was still valid.

Now, here is the problem: without an explicit cache expiry set for the file, the web cache is at liberty to decide when it deems appropriate to check whether the file is still valid. Without any explicit expiry in the HTTP headers, web caches use heuristics to determine when to check validity again, doing calculations on Date and Last-Modified: if the gap between when the cache last validated the file and when the file was last modified is large, the file probably isn’t changing much, so the cache waits even longer before checking validity again.
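As a rough illustration of that heuristic (the 10%-of-interval rule is a common guideline from HTTP caching practice; real caches are free to use their own formula), applied to the headers from my friend’s response:

```csharp
using System;
using System.Globalization;

class HeuristicFreshness
{
    // Rough sketch of a common heuristic: treat the response as fresh for
    // 10% of the interval between Last-Modified and the response Date.
    public static TimeSpan HeuristicLifetime(DateTime responseDate, DateTime lastModified)
    {
        return TimeSpan.FromTicks((responseDate - lastModified).Ticks / 10);
    }

    static void Main()
    {
        // Values from my friend's cached response above.
        var lastModified = new DateTime(2010, 6, 1, 5, 21, 36, DateTimeKind.Utc);
        var date = new DateTime(2010, 6, 25, 4, 20, 46, DateTimeKind.Utc);

        // Roughly 2.4 days before the cache would revalidate - plenty of time
        // for users to be told no update exists.
        double days = HeuristicLifetime(date, lastModified).TotalDays;
        Console.WriteLine(days.ToString("F1", CultureInfo.InvariantCulture));
    }
}
```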

I found that someone had already solved this issue (http://www.codeproject.com/KB/dotnet/ClickOnceContentExpiratn.aspx), which I implemented, though slightly differently, by adding the following line to the web.config under the <handlers> node:

<add name="ClickOnce" verb="*" path="*.application" type="CustomHttpHandlers.ClickOnceApplicationHandler" resourceType="Unspecified" preCondition="integratedMode"/>

I then needed to enable the IIS7 Integrated Pipeline on the web server, which ensures all file types are routed through the ASP.NET runtime, so that the HTTP handler is executed for the .application file and the cache expiration is explicitly set.
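For reference, a minimal sketch of what such a handler might look like – the actual CustomHttpHandlers.ClickOnceApplicationHandler implementation from the CodeProject article may differ, but the essential idea is to serve the manifest with caching disabled:

```csharp
using System;
using System.Web;

namespace CustomHttpHandlers
{
    // Sketch only: serves the .application manifest with headers telling
    // browsers and intermediate web caches not to cache it.
    public class ClickOnceApplicationHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "application/x-ms-application";

            // Emit Cache-Control: no-cache and an already-expired Expires header.
            context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
            context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));

            // Stream the manifest file from disk as the response body.
            context.Response.WriteFile(context.Request.PhysicalPath);
        }
    }
}
```

This fragment only runs inside IIS/ASP.NET, so treat it as a configuration-level sketch rather than a standalone program.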

Retrieving the ClickOnce .application file now comes back with the following HTTP headers:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/x-ms-application
Expires: -1
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Tue, 29 Jun 2010 09:49:23 GMT
Content-Length: 5765
Connection: Keep-Alive

This should tell any web caches the ISPs are using that this file must not be cached – meaning the latest versions of the application can be deployed promptly to all users. It will take a few days for the existing caches to refresh before I can confirm it works as expected, but hopefully this will be a win.


# Delete the Currently Playing FileApr26

Monday, 26 April 2010 by haemoglobin

For years now I’ve had songs in my music collection that are so terrible I shudder when they come on – but the process of actually deleting the file so I’d never hear it again has always been so painful that I never do it. So I end up listening to the thing again and again.

Applications like iTunes, VLC and Windows Media Player all work the same way, deleting the song from the playlist and not from the file system itself. To delete from the file system you usually have to go through some complicated process to find the file path, open Windows Explorer and then delete it from there.

I’ve googled a bit over time for plugins and so on that let you actually delete the currently playing song, but I’ve never had much luck, so I thought I would bite the bullet and make something up myself.

I had grand plans of a lovely little WPF application that sits in the system tray with an assigned keyboard hotkey to delete the currently playing song. I thought I would do this by scanning the file handles the media player has hold of, finding the mp3/wma file and then deleting it. Since .NET isn’t very capable of getting low-level file handle information without a whole lot of unmanaged code, I thought I would just use the output from the Sysinternals Handle application. However, this needs administrative privileges to run, and I subsequently found out that it is impossible to run an external process from .NET with elevated privileges (contrary to many posts around the internet) – on Windows 7 with UAC enabled anyway. This guy also found this out the hard way.

So I’m sorry – I didn’t end up making a pretty application, but I did get something going that works (a console application), though it is a bit geeky for the average user unfortunately.

However I thought I would share it anyway. Here are some steps to set it up:

1. First of all, download DeleteCurrentlyPlaying.zip and extract to a directory. If you get an error when running it later, you may not have the .NET Framework 3.5 installed.
2. Open a command prompt with Administrative rights. This guide tells you how to do this. On Windows XP you won’t need to worry about this.
3. Navigate to the folder with the files in it e.g “cd c:\path\to\downloads\DeleteCurrentlyPlaying”
4. Now run the following if your song is currently playing in Windows Media Player, VLC Player or iTunes respectively (this is just the process name changing):
handle -p wmplayer | DeleteCurrentlyPlaying
handle -p vlc | DeleteCurrentlyPlaying
handle -p itunes | DeleteCurrentlyPlaying
5. For your peace of mind, the file isn’t actually deleted; it is moved to a subfolder “To Delete” for you to delete at some point. Also, if you would rather download the source and compile it yourself, you can find it here.

The application will wait until the file has been released by the application before it is moved (in the case of Windows Media Player it seems it can be moved while it is being played).
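The core of that wait-and-move behaviour can be sketched as a simple retry loop – this is just the idea, not the tool’s actual code (the real source is linked above):

```csharp
using System;
using System.IO;
using System.Threading;

class MoveWhenReleased
{
    // Keep retrying the move until the media player releases its handle on
    // the file; File.Move throws IOException while the file is locked.
    public static void MoveToDeleteFolder(string path, string toDeleteDir)
    {
        Directory.CreateDirectory(toDeleteDir);
        while (true)
        {
            try
            {
                File.Move(path, Path.Combine(toDeleteDir, Path.GetFileName(path)));
                return; // moved successfully - the handle has been released
            }
            catch (IOException)
            {
                Thread.Sleep(500); // still locked by the player; try again shortly
            }
        }
    }
}
```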

It should look something like this:

You can just leave this console window open, hit the up arrow to repeat the last command, and hit Enter again when the next song to go comes on.

UPDATE 24/9/2011

I have found a way to make execution of this slightly easier. Create a .bat file in the unzipped directory with the following contents:
cd path_to_unzipped_dir
handle -p wmplayer | DeleteCurrentlyPlaying     (or replace wmplayer with whatever your music player process is)

Now, create a shortcut to this bat file and right click –> properties –> Shortcut –> Advanced –> Check “Run as administrator”.
The reason we need to cd to the unzipped dir in the batch file above is that when cmd is run as administrator it seems to ignore the “start in” directory and takes you to c:\windows\system32.

If you are using a productivity tool such as Launchy you can then create an entry direct to the shortcut using the Runner addin.

# Understanding Action<T> and Func<T, TResult>Apr22

Thursday, 22 April 2010 by haemoglobin

In .NET 3.5 there are two delegate types you see everywhere: Action<T> (which actually arrived in .NET 2.0) and the new Func<T, TResult>.

There are other overloads for Action<T> and Func<T, TResult> if you need to provide more than one parameter, e.g:
- Action<T1, T2, T3>
- Func<T1, T2, T3, TResult>
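Stripped right back, the two shapes look like this (a trivial sketch, nothing framework-specific assumed):

```csharp
using System;

class ActionFuncBasics
{
    static void Main()
    {
        // Action<T>: takes a parameter and returns nothing.
        Action<string> greet = name => Console.WriteLine("Hello " + name);
        greet("world"); // prints "Hello world"

        // Func<T, TResult>: takes a parameter and returns a value
        // (the last type parameter is always the return type).
        Func<int, int> square = x => x * x;
        Console.WriteLine(square(5)); // prints 25
    }
}
```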

I wanted to test my understanding of these by making a quick console app that uses them. The example below is extremely contrived and something you would not do in practice, of course, but I believe it breaks down the core essence of their use, so you can use your imagination from there as to how you would actually use them.

It actually started off much simpler, but I intentionally beefed it up with plenty of generics and multi-line / multi-parameter lambdas, so if you understand this snippet you also have a good grasp of those too.

    static void Main(string[] args)
    {
        int result = DoStuffToSomething<DateTime, DateTime, int>(
            DateTime.Now,
            DateTime.MaxValue,
            date =>
            {
                Console.WriteLine(date.ToShortDateString());
                Console.WriteLine(date.ToLongTimeString());
            },
            (date1, date2) => (date2 - date1).Days);

        Console.WriteLine(result.ToString() + " days before .NET DateTime explodes.");
    }

    static R DoStuffToSomething<T1, T2, R>(T1 something, T2 somethingElse, Action<T1> myAction, Func<T1, T2, R> myFunction)
    {
        myAction(something);
        return myFunction(something, somethingElse);
    }

Prints:
22/04/2010
12:48:12 p.m.
2918175 days before .NET DateTime explodes.

Good to know I guess.

Also note Action<T>’s use in the framework, such as the ForEach method on the generic List<T>. It can be used like so (taken from http://msdn.microsoft.com/en-us/library/bwabdf9z.aspx):

    static void Main()
    {
        List<String> names = new List<String>();

        // Display the contents of the list using the Print method.
        names.ForEach(Print);

        // The following demonstrates the anonymous method feature of C#
        // to display the contents of the list to the console.
        names.ForEach(delegate(String name)
        {
            Console.WriteLine(name);
        });
    }

    private static void Print(string s)
    {
        Console.WriteLine(s);
    }

I would also add to the MSDN example to show how the same can be done using a lambda expression:

    names.ForEach(name => Console.WriteLine(name));

When new language features like these come out, I think it is good practice to at least get an understanding of them, so that when the right situation presents itself you have a new and possibly more efficient or readable option to use.