WMD Markdown Editor SyntaxHighlighter Integration

Wednesday, 21 April 2010 by haemoglobin

For my blog, I use SyntaxHighlighter to format code snippets (which I believe to be the best code formatting solution out there, for a few reasons). The problem, however, is that it requires you to wrap your code in <pre class="brush: c#"> your code goes here </pre> tags – which can be a pain.

The WMD Editor control (or the stackoverflow open source port) on the other hand has a convenient button for "code" (or the Ctrl-K hotkey) that will indent the snippet in such a way that when rendered by Markdown (whether in the WMD Editor’s preview pane or server side with MarkdownSharp, for example) it will be wrapped in <pre><code>your code snippet</code></pre> tags.

This format won't be highlighted by SyntaxHighlighter by default however.

Rather than modify SyntaxHighlighter, my solution was to use jQuery to transform all instances of:

<pre><code>code snippet</code></pre>

rendered on the page to:

<pre class="brush: c#">code snippet</pre>

After this has been done, SyntaxHighlighter will be able to format the code snippet as it usually would.

I’ve created the following jQuery snippet to do this (which also goes to demonstrate the power of jQuery):

	//Find all <pre> tags that have a <code> tag child, and add the "brush: c#" class to the <pre> tag.
	$('pre:has(code)').addClass('brush: c#');
	//Find each <code> tag (with a <pre> parent), and replace it with its contents, leaving only the <pre> tag behind.
	$('pre>code').each(function() {
		var cnt = $(this).contents();
		$(this).replaceWith(cnt);
	});

The end result is beautiful syntax highlighting of your code snippets with a simple click of the WMD Editor's 'code' button (or the Ctrl-K hotkey).
If you wish to use another brush, you can still explicitly define a <pre> tag as normal, specifying a different SyntaxHighlighter brush.

ASP.NET Wiki Control with Markdown

I am hosting an open source project that wraps the WMD Editor and MarkdownSharp projects together into a convenient wiki control over Linq to SQL that you can embed in any ASP.NET website (also with special instructions for installing into BlogEngine.NET for a better page editing solution).

I have just updated the latest version to include SyntaxHighlighter integration: it uses javascript to detect the presence of jQuery and SyntaxHighlighter and, if both are found, runs the jQuery code mentioned above to format your code snippets automatically through SyntaxHighlighter.

More information on the control can be found on the ASP.NET Wiki Control with Markdown download page.


Web Farm Considerations

Thursday, 8 April 2010 by haemoglobin

Background

I have been involved with ASP.NET deployments for some high load scenarios where the same exact copy of a web application is deployed to multiple servers and placed behind a network load balancer (NLB) – also known as a web farm.

Requests to the website first arrive at the NLB, and are then directed off to a web server that is available to handle the request (the web server allocated depends on a number of load balancing algorithms the NLB server can use).

When deploying an ASP.NET application to a web farm for load balancing there are a few things to consider.

Session State

Session state in ASP.NET, simply speaking, is the ability to associate some information with a particular user as they click through your site, by using the .NET Session object. Behind the scenes the user session is tracked through a single non-persistent cookie (removed when the browser is closed) named ASP.NET_SessionId. The cookie contains a single ID that associates that user back to their session information on the server.

The actual information can be stored on the server in one of three ways – but only two of which will work when deploying to a web farm:

InProc – This is the default. InProc stores the session information in the memory of the ASP.NET worker process on the server. It cannot be used with load balanced servers, since if the user is sent to another server on a subsequent request, that server will not have the user’s session information available in its local worker process memory space.

StateServer – With this option, all web servers in the web farm store session information in the memory of a single process (a Windows service called “ASP.NET State Server”), preferably on a separate machine. This option works for web farms, but is 15% slower than InProc – which is of course offset by having the extra servers processing requests.

SQLServer – Using SQL Server, session information is serialized and stored in a SQL Server database. This is 25% slower than InProc, however SQL Server can be set up as a failover cluster, which has reliability advantages over StateServer (a single point of failure). This option can also survive a SQL Server restart; if StateServer is restarted, on the other hand, all session information is lost.

Whether you choose StateServer or SQLServer is a trade-off between reliability and speed.
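
One practical consequence worth remembering when moving away from InProc: both of the out-of-process modes serialize whatever you put in the Session, so those objects need to be marked [Serializable]. A minimal sketch (ShoppingBasket and the button handler are just for illustration):

[Serializable] //Required once session state moves out of process (StateServer or SQLServer).
public class ShoppingBasket
{
	public int ItemCount;
}

//In a page's code behind – the same code works unchanged for InProc, StateServer and SQLServer:
protected void AddButton_Click(object sender, EventArgs e)
{
	ShoppingBasket basket = (ShoppingBasket)Session["Basket"] ?? new ShoppingBasket();
	basket.ItemCount++;
	Session["Basket"] = basket;
}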

Machine Key

ASP.NET hashes ViewState by default to ensure that it hasn’t been tampered with before being sent back to the server. Out of the box the key used to generate the hash is a random one generated when the application pool starts. There is also a decryption key that is used to encrypt forms authentication tokens (also auto generated).

This has two problems:

  1. If the Application Pool restarts (by sysadmin or automatically through inactivity or scheduled recycling) a new validation/decryption key will be generated, and a user on a subsequent post back will receive a ViewState validation error or be logged out.
  2. If the user is sent to another server in the web farm on post back, the other server will have its own validation/decryption key, and the user will receive a ViewState validation error or be logged out.

So in order to work with a web farm it is necessary that all servers in the farm use the same validationKey and decryptionKey values and not have them auto generated.

A good CodeProject article describes how to do this: http://www.codeproject.com/KB/aspnet/machineKey.aspx

<machineKey 
validationKey="E36B… CE0260028"
decryptionKey="DEF1F… 7F7C0BCCF85"
validation="SHA1" decryption="AES"/>

Also note that it is possible to encrypt ViewState (instead of just making it tamper proof) by setting the validation property to 3DES. This however incurs a performance hit so it is better to store sensitive information in the Session to avoid having to do this.
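
The article above walks you through generating these key values; as a rough sketch of one way to do it yourself (a throwaway console app – 64 bytes / 128 hex characters is a commonly used length for a SHA1 validationKey, and 32 bytes suits an AES decryptionKey):

using System;
using System.Security.Cryptography;
using System.Text;

class MachineKeyGenerator
{
	static string CreateKey(int byteLength)
	{
		//Fill a buffer with cryptographically strong random bytes and hex encode it.
		byte[] buffer = new byte[byteLength];
		new RNGCryptoServiceProvider().GetBytes(buffer);

		StringBuilder key = new StringBuilder(byteLength * 2);
		foreach (byte b in buffer)
		{
			key.AppendFormat("{0:X2}", b);
		}
		return key.ToString();
	}

	static void Main()
	{
		Console.WriteLine("validationKey=\"{0}\"", CreateKey(64));
		Console.WriteLine("decryptionKey=\"{0}\"", CreateKey(32));
	}
}

Paste the output into the machineKey element on every server in the farm.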

Storing / Sharing Files

Another problem exists when dealing with file uploads / downloads (when using the file system and not storing files in the database). All servers in the web farm need to point to a common directory for the file store; otherwise, if each server points to a local directory, the files will be split across the servers and a given file will sometimes exist and sometimes not, depending on which server the user was sent to.

This can be overcome by setting up IIS Virtual Directories on the websites in the web farm that point to a common network share (on a file server somewhere on the network).

In order to test this concept locally for this blog post, I first created a directory c:\Test and turned on network sharing on this directory so that I can now browse to it through the UNC path \\HAEMOGLOBIN-PC\Test (in practice this would point to a share on another machine).

Within IIS7, the virtual directory “Uploads” can then be added that points to this share and will look something like this:

[Screenshot: the "Uploads" virtual directory in IIS7 pointing to the \\HAEMOGLOBIN-PC\Test share]

This allows the network share location to be maintained and controlled by system administrators – the only other thing to be aware of is setting up the correct permissions for the file share to be accessed and written to. This will depend on whether identity impersonation is enabled or what account the anonymous user is running as.

Coding against this is simple: saving a file is just a matter of using MapPath and the virtual directory name (this allows the system administrator to change the UNC location at a later point through IIS):

FileUpload1.SaveAs(MapPath(@"/Uploads/" + FileUpload1.FileName)); 

The above will be saved to \\HAEMOGLOBIN-PC\Test (as configured before, and this could be on another machine), which in turn resolves to c:\Test on that machine.

In terms of providing a link to download a file - it would simply look like:

<a href="/Uploads/uploadedFile.zip">Uploaded File</a>

This will (once again through the IIS virtual directory mapping) download the file from the common file share location on the network and work for any server the user happens to be on in the web farm.
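
To round the example out, here is a rough sketch of building those download links dynamically from whatever is currently in the share. It assumes the "Uploads" virtual directory from above and a hypothetical LinksPanel control on the page:

//List every file currently in the shared Uploads directory and add a download link for each.
string uploadsPath = MapPath("/Uploads/");
foreach (string file in System.IO.Directory.GetFiles(uploadsPath))
{
	string fileName = System.IO.Path.GetFileName(file);
	HyperLink link = new HyperLink();
	link.NavigateUrl = "/Uploads/" + fileName; //The same virtual path works in the browser.
	link.Text = fileName;
	LinksPanel.Controls.Add(link);
	LinksPanel.Controls.Add(new LiteralControl("<br />"));
}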

Hope this helps :)


Versioning a Multi-Component System

Thursday, 21 January 2010 by Haemoglobin

A lot of the systems that I have worked on have involved more than one component. For example, instead of just an ASP.NET website, there would also be some sort of windows service, maybe a windows forms component, or independent modules and libraries working within those components – all working together to form a final solution.

This tends to create a bit of a versioning and deployment nightmare and you need to have a good solid strategy around it before launching into things. I have tried a couple of approaches.

For one project, I had an automated build script that would go through and modify the version number of all the assemblies, build all the components and label the version from the root in source control. All components would then be redeployed for every release, regardless of whether they have changed or not. This removed human error in deployments as everything was automated, and the deployment steps were always exactly the same for each release so even those could be automated.

This worked pretty well for this project, because there were so many developers that it would have been too time consuming to do a full version-to-version diff to figure out which components had changed for each release and to deploy only those. Yes, there would be release notes on what had changed in each component, written by project management or the developer, but that relies on how dutiful Mr Developer happened to be at the time – it’s possible little “tidy ups” or refactorings were done in components that never end up appearing on the release notes. So one answer is just to test and redeploy everything (this is where automated testing can also be very useful, to test for regressions across the system). The only problem with it is that you potentially end up deploying a lot of unchanged stuff – albeit just with updated version numbers, all for potentially one small change in one of the components.

For another project there were far fewer developers, and it was quite easy to just know which components were being changed for each release – so it was possible to deploy just those that were being updated. I came up with the concept of a system version.

Say if you have the following structure in your source control repository for your software solution:

  • MySolution
    • ASP.NET Website
    • Windows Forms Application
    • Windows Service
    • Reporting Services Reports
    • Database Scripts

When you deployed the first version to production, imagine everything was versioned at 1.0.0. All assemblies / exes would have been deployed as version 1.0.0 (or, for database scripts, using other techniques like updating the database version row to this), and the label Version_1_0_0 (or similar) then applied to the MySolution root in source control.

This is in contrast to labelling each subcomponent separately in source control and keeping a configuration management register updated with what versions are deployed where and what versions are all working and tested with what other component versions – uggh.

Now – say that a bug in the windows forms application is reported – it makes sense to send out a new version of just that, not everything. You make the change, and update the windows forms application’s version to 1.0.1. Now the solution is labelled with Version_1_0_1 from the root, where 1.0.1 is effectively the new system version. Note however that the rest of the sub-components are still set at version 1.0.0 (literally within their AssemblyInfo.cs files) – but the 1.0.1 label spans across them, effectively grouping the various current sub-component versions under the 1.0.1 system version umbrella.

Now – let’s say a whole lot of work is done across all the different components to create a new feature that has been asked for, which will become version 1.1.0. During this time, another small issue is found in production – this time in the windows service. Since we are in the middle of development of version 1.1.0, which is consequently unstable, we must branch the solution from version 1.0.1 (using the source control branching functionality). Here we make our change to the windows service, update its assembly version to 1.0.2 and then label the root of the branch as system version 1.0.2. We then deploy just the windows service to the client.

If we were deploying this solution to multiple clients, who for some reason were running on different versions, you can easily recreate in the development environment the version that they have running in their environment as long as you know their system version. Note that this must be recorded somewhere – but in this case it’s just one number per system deployment, as opposed to a complex set of versions of all the components installed. Pulling that label out of source control will retrieve all the subcomponents in their correct version as deployed for that system version, and consequently retrieve exactly what is running in production for each component.

The only danger with this technique is the possibility of a component being changed (maybe a developer deciding to re-factor some previous work), and not having this component deployed along with the next system version. This would effectively create a mismatch between what is deployed and running on the client’s system and the code associated with that system version in source control. This creates unreproducible bugs and other issues. To solve this, a quick and easy technique should be devised to query source control to see what components have been “touched” since the last release.

I would be interested to hear how others go about versioning in these sorts of situations. I may make a follow up post at some point about database versioning, and UAT deployment techniques.


Monitoring ADSL Drop Outs

Wednesday, 18 November 2009 by haemoglobin

At home we have previously had issues with our ADSL dropping out intermittently, and have had the ISP send technicians around to fiddle, test and rewire etc. This seemed to make a difference for a good while, but as of the last two days we are back to square one with frequent and annoying ADSL disconnections (no changes to anything in the house).

What I’m afraid of doing now is ringing the ISP again and being told the standard “restart the router”, “pull all the phones out of the wall”, “turn the oven off”/”Pray to the Gods” etc etc, all of which I know most likely has nothing to do with it (tried it all before).

I decided to do my own internet research for reasons why ADSL might drop out. I learnt all sorts of interesting things about noise margins, line attenuation, data sync rates etc that all have an impact on how stable the line is.

Checking my Belkin router’s status page, I note that it actually gives me this information - cool:

[Screenshot: the Belkin router's status page showing noise margin, line attenuation and data rates]

Of course, I start rapidly refreshing the page to see how the numbers are changing as the ADSL internet connection drops in and out. The Data rate is different on each reconnect, and the noise margin is fluctuating all over the place.

Refreshing is no good – tonight I decided that this goodness needed to be graphed!

Windows’ inbuilt performance counters are ideal for this sort of thing. We just need a custom performance counter category for the ADSL connection data, then poll the router’s status page and feed the new performance counters with data.

I fired up Visual Studio and started plugging away at a windows app that will do this for me – easy to do – behold the ADSL Monitor:
[Screenshot: the ADSL Monitor windows application]

The following code creates the new ADSL performance object, with the four counters I’m interested in (noise margin up/down and data rate up/down):

 

CounterCreationDataCollection counters = new CounterCreationDataCollection();

counters.Add(new CounterCreationData("Noise Margin Down", "Noise Margin Down", PerformanceCounterType.NumberOfItems64));
counters.Add(new CounterCreationData("Noise Margin Up", "Noise Margin Up", PerformanceCounterType.NumberOfItems64));
counters.Add(new CounterCreationData("Data Rate Down", "Data Rate Down", PerformanceCounterType.NumberOfItems64));
counters.Add(new CounterCreationData("Data Rate Up", "Data Rate Up", PerformanceCounterType.NumberOfItems64));

string _performanceCategory = "ADSL";

//Delete and recreate the category so the counter definitions above are always current.
if (PerformanceCounterCategory.Exists(_performanceCategory))
{
	PerformanceCounterCategory.Delete(_performanceCategory);
}

PerformanceCounterCategory.Create(_performanceCategory, "ADSL Diagnostics", PerformanceCounterCategoryType.SingleInstance, counters);

 

I then fire off a thread and have it looping over the following every second:

	           
	while (running)
	{
		WebRequest request = WebRequest.Create("http://192.168.2.1/status.stm");
		WebResponse webResponse = request.GetResponse();
		StreamReader stream = new StreamReader(webResponse.GetResponseStream());
		string statusPage = stream.ReadToEnd();
		stream.Close();
		webResponse.Close();

		//Providing data to the performance counter 
		PerformanceCounter pc = new PerformanceCounter(_performanceCategory, "Noise Margin Down", false);
		pc.RawValue = Convert.ToInt32(GetDataResult(DataMatch.adsl_noise_margin_ds, statusPage));
		...

		Thread.Sleep(1000); 
	}

A simple Regex is used in GetDataResult to pull the appropriate data off the router’s status page.
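
GetDataResult itself isn't shown above; a minimal sketch of the idea, assuming DataMatch is just a set of regex pattern strings with one capture group around the number we want (the actual patterns depend entirely on your router's status page markup):

//Requires using System.Text.RegularExpressions.
static class DataMatch
{
	//Hypothetical pattern – adjust it to match your own router's status page HTML.
	public const string adsl_noise_margin_ds = @"Noise Margin[\s\S]*?(\d+)";
}

static string GetDataResult(string pattern, string statusPage)
{
	//Return the first captured number from the status page, or "0" if nothing matched.
	Match match = Regex.Match(statusPage, pattern);
	return match.Success ? match.Groups[1].Value : "0";
}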

Now, browsing to the inbuilt windows Performance Monitor, we can add our newly created ADSL performance object and graph what is really going on. For me, it was looking like this:

[Screenshot: Performance Monitor graphing the new ADSL counters over time]

What the ! !  It seems more than half the time I’m without internet. The most important lines I believe are the blue and red. The blue shows the synced data downstream rate (higher the better), and the red shows the downstream noise margin (once again, higher the better – apparently anything below 6 is quite unstable).

Where the blue line is at rock bottom is where I don’t have any internet at all. What a pain – as you can see towards the end it started to stabilise with quite a high noise margin, albeit a low data rate :(
Hopefully this will give me more information to provide to the ISP when I finally do ring, as well as letting me monitor the situation and not even bother surfing the internet while it is ridiculously unstable.

Sigh.


Enabling HttpOnly & RequireSSL on Cookies

Friday, 12 September 2008 by Haemoglobin

Unless you have a specific reason to access cookies from javascript, it is a good idea to turn on HttpOnly on the cookie to help prevent cross site scripting attacks from sending your cookie content elsewhere – have a read of Jeff Atwood's post here: http://www.codinghorror.com/blog/archives/001167.html

On top of this however, there is another setting that is not mentioned, and that is RequireSSL. This will mean that the web browser will only send the cookie to the website if it is requested over SSL. 

The idea of that didn't make much sense to me at first – surely the user would need to purposely change the address from https to http for this to be an issue – until you consider the following scenario (it still seems like a long shot, but hey):

1) You log in as an administrator on the HTTPS site.
2) The website sets a cookie on your browser with your administrator session token. This is transmitted encrypted with each request back to the website.
3) While your session is open you browse to a dodgy website (or are directed to it somehow by the hacker).
4) The dodgy website redirects you to the same site you logged in to as an administrator, but using HTTP instead of HTTPS. The site might throw an error at this point saying that it can only be accessed over HTTPS, but the damage has already been done as per the next point (there might be ways of hiding this web request from the user, however).
5) Your administrator session token in the cookie is now submitted across the network in plain text.
6) The hacker now intercepts this through a man in the middle attack (using ARP poisoning or similar).
7) The hacker now has your administrator session token and can use it to browse the site as you.

Now, I think there are further protections the HTTPS site could add – for example, checking that the IP address & browser user agent remain consistent for a particular session – however it is possible to fake these as well.
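
As a rough illustration of that kind of check, here is a sketch for Global.asax that remembers the client details when the session starts and bounces the request if they change mid-session (and, as above, both values can be spoofed, so treat it as an extra hurdle rather than real protection):

protected void Application_PreRequestHandlerExecute(object sender, EventArgs e)
{
	HttpContext context = HttpContext.Current;
	if (context.Session == null)
		return; //This request has no session (static content etc).

	//Tie the session to the client details seen when it was created.
	string fingerprint = context.Request.UserHostAddress + "|" + context.Request.UserAgent;

	if (context.Session["Fingerprint"] == null)
	{
		context.Session["Fingerprint"] = fingerprint;
	}
	else if ((string)context.Session["Fingerprint"] != fingerprint)
	{
		//Details changed mid-session – treat the token as stolen.
		context.Session.Abandon();
		FormsAuthentication.SignOut();
		FormsAuthentication.RedirectToLoginPage();
	}
}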

Seems difficult to achieve, but hmm - if you have an HTTPS site, just put the following line in the web.config and you will be right (at least for all browsers that support it):
<httpCookies httpOnlyCookies="true" requireSSL="true"/>

This will set the options on all cookies leaving the site – or you can turn the settings on each cookie individually, or on the forms authentication cookie, as in the sketch below.
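
For individual cookies (or the forms authentication cookie specifically), a sketch would look something like this – "Preferences" and userName are just placeholders for whatever your site already has:

//Setting the flags on a single cookie by hand:
HttpCookie cookie = new HttpCookie("Preferences", "compact");
cookie.HttpOnly = true; //Not readable from javascript.
cookie.Secure = true;   //Only sent back over SSL.
Response.Cookies.Add(cookie);

//The forms authentication cookie can be issued the same way
//(requireSSL="true" on the <forms> element also marks it Secure):
HttpCookie authCookie = FormsAuthentication.GetAuthCookie(userName, false);
authCookie.HttpOnly = true;
authCookie.Secure = true;
Response.Cookies.Add(authCookie);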

It's an interesting example to think about however, as it gets you thinking about all things security after that.

[Update: It pays to also be aware of XSRF]

 


Securing Static (non-ASP.NET) Files

Wednesday, 13 August 2008 by Haemoglobin

When you have forms authentication set up in ASP.NET, you might have a folder containing static content, such as PDF/zip files etc. You might try to protect these files using the following item in the web.config to allow access to authenticated users only:

            <location path="PDFFolder">
                  <system.web>
                        <authorization>
                              <deny users="?"/>
                        </authorization>
                  </system.web>
            </location>

This will not work however since IIS serves non ASP.NET file types directly, and will not pass the request through to ASP.NET to carry out any authentication first.

In order to protect all static content in the site (if you have any), you need to set up a wildcard application map on the virtual directory (mapping all requests to aspnet_isapi.dll) so that every file type is passed through to ASP.NET and everything is protected.

It is important however that “Verify that file exists” is NOT ticked, since there are some ASP.NET files such as:

  • WebResource.axd
  • Trace.axd

which don’t physically exist on disk.

WebResource.axd, it seems, is responsible from .NET 2.0 onwards for dishing up the ASP.NET framework js files (in 1.0/1.1 these were served from the C:\Inetpub\wwwroot\aspnet_client\system_web\1_1_4322 folder).

If it is ticked, you will have missing javascript object errors all over the place (because IIS will scrap the request to WebResource.axd), and you may be tempted to incorrectly think the empty C:\Inetpub\wwwroot\aspnet_client\system_web\2_0_50727 folder is where the problem lies.

So there you go.

Also, if you are curious what AXD stands for (which of course I was), here it is from the man himself:

“Hi Wouter,

I'm somewhat embarrased to say that I don't think it stands for anything. I think we choose it because it sounded cool, and used the leters a & x -- which we usually incorporate in other file extension names.

Hope this helps!

Scott”

http://blogs.infosupport.com/wouterv/archive/2005/08/11/918.aspx
