I’ve started a new technical blog about the cloud. It’s called Forecast: Cloudy and it will feature thoughts, ideas and some code, drawn mostly from my experience running services on cloud infrastructure.

While it won’t necessarily talk about .NET and debugging, it will feature general architectural examples as well as use cases for various services that are also accessible from .NET.

The first post (after the traditional “Hello World”) already has some code :-)

Check it out and don’t forget to tell me what you think about it.

I’ve just received in the mail the new Advanced .NET Debugging book by Mario Hewardt, which I had the pleasure of reviewing prior to its publication (I sure hope my comments helped a bit, if at all :-) ).

The book covers the internals of .NET and tools that can help you debug and solve issues, such as WinDbg and PowerDbg, and it also covers specific, common issues that many other developers have encountered (heap corruption, deadlocks, etc.).

The book is well written and structured, and I would recommend it to any .NET developer who wishes to better understand how .NET works under the hood, as well as to gain the necessary tools (and it IS necessary to use these tools :-) ) to debug and solve real-world problems that cannot be resolved with regular debuggers such as Visual Studio.

The book fills a niche that was left empty for far too long. When I started this blog there was almost no knowledge online about advanced .NET debugging. I’m happy that Mario was the one to bring all of this information together in a structured book.

Get the book here:

You might also want to check out Mario’s previous book, Advanced Windows Debugging, which also covered the use of WinDbg and other tools, but in a native context. It’s a great complement to the .NET side of things.

Get the book and debug without fear! :-)

If you are using the AJAX Control Toolkit you’ve probably noticed the ToolkitScriptManager server control.

One of the great features of the ToolkitScriptManager that comes with the AJAX Control Toolkit is its ability to combine all the client-side scripts that need to be loaded in the browser into one request, saving the browser from issuing multiple requests to the server and speeding up page loading.

The URL of that combined request contains a hash code for each script that should be combined, so if you have different combinations of client-side scripts, due to the different combinations of controls you are using from the AJAX Control Toolkit, you will have a unique URL for each such request.

“So what’s in it for me?”, you ask.

It’s simple. Since these scripts only change when you upgrade the AJAX Control Toolkit version (and that only happens once in a while), and since they can be quite large (even when gzipped), they are perfect candidates for being delivered from a Content Delivery Network (CDN).

In short, a CDN can make your site faster by delivering static content (e.g. images, scripts, static HTML files, etc.) from a location closer to the user making the request. It does that by performing the request on behalf of the user, caching the resource for a certain period of time and distributing it to servers that are geographically closer to the user.

In most CDN systems you need to change the hostname of the resource you wish to have fetched from the CDN to something that was preconfigured to work with your site. The problem with the AJAX Control Toolkit is that it is the one that renders the link to the combined JavaScript files, and you have no control over it.

Luckily, the designers of the ToolkitScriptManager class provided a hook that allows you to change the handler URL of the combined scripts.

If you set the “CombineScriptsHandlerUrl” property to a URL that refers to a CDN, these scripts will be downloaded through the CDN, making your site load faster.
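A minimal sketch of what that looks like in the page markup (the CDN hostname is hypothetical, and this assumes the ajaxToolkit tag prefix is registered for the AJAX Control Toolkit assembly):

```aspx
<%-- The CDN hostname below is hypothetical; use the hostname mapped to your CDN --%>
<ajaxToolkit:ToolkitScriptManager ID="ScriptManager" runat="server"
    CombineScripts="true"
    CombineScriptsHandlerUrl="http://mycdnsubdomain.mysite.com/MyPage.aspx" />
```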

For example, instead of having the combined scripts URL look like this:

MyPage.aspx?_TSM_HiddenField_=ctl00_ScriptManager_HiddenField&_TSM_CombinedScripts_=…

It will look something like this:

http://mycdnsubdomain.mysite.com/MyPage.aspx?_TSM_HiddenField_=ctl00_ScriptManager_HiddenField&_TSM_CombinedScripts_=…

This will actually make the request go to the CDN instead of directly to your site, since “mycdnsubdomain.mysite.com” is a domain mapped to the CDN (of course, not all CDN networks work exactly like that, but most work in this manner, where you map a certain subdomain or another domain to the CDN).
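As a sketch (all hostnames here are hypothetical), that mapping is usually just a DNS CNAME record pointing your chosen subdomain at the CDN provider’s edge network:

```text
; mycdnsubdomain.mysite.com is served by the CDN's edge servers
mycdnsubdomain.mysite.com.   IN   CNAME   edge1234.example-cdn.net.
```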

It’s a rather quick and easy way to boost your site’s load speed with very little effort. Keep it in mind when you need to optimize the client-side loading of your site.

Debug.Assert lets you check various assumptions during the debugging process or when you are running a debug build.

One of the problems with Debug.Assert is that it pops up a MessageBox dialog. UI dialogs are a big no-no in server applications, since there are scenarios in which the dialog will never be shown and your server application will simply get stuck waiting for someone to press the dialog’s buttons.

One of the most common cases in which you will not see the dialog is when your server application runs under different credentials than your currently logged-on user. In that case the UI dialog will hide somewhere in a different session without any chance of being seen by any user.

.NET employs various heuristics to figure out when to display the assert dialog. In most cases, in a server application it will not show the assert dialog at all (even in debug builds). As far as I could tell (and it’s currently just a calculated guess), if you have a debugger installed and registered as a debugger, .NET WILL display the dialog, and that may pose a problem if your server application runs under different credentials than the currently logged-on user.

It seems there is a way to overcome this by using a configuration flag.

Add the following section to your web.config (or app.config for applications other than ASP.NET) to disable popping up a UI window in your application, which may cause it to deadlock:

<system.diagnostics>
  <assert assertuienabled="false" />
</system.diagnostics>

If you have any trace listeners that might pop up a UI window, you can also add the following configuration to your web.config to clear all trace listeners:

<system.diagnostics>
  <trace>
    <listeners>
      <clear/>
    </listeners>
  </trace>
</system.diagnostics>

If you are not sure this is really your problem, it’s very easy to figure out using WinDbg: attach to the process, load SOS.dll and run the command:

~*e!clrstack

This command will show the managed call stack of each thread in the application (some of these threads are not managed, so the command may return an error for them; you can safely disregard those).

If you see in a call stack a call to “Debug.Assert” followed by “MessageBox.Show”, you can be certain that this is the problem you are facing.
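The full WinDbg sequence looks roughly like this, assuming .NET 2.0 (where the runtime DLL is mscorwks.dll): the first command loads SOS.dll from the directory of the loaded runtime, and the second runs !clrstack on every thread in the process:

```text
.loadby sos mscorwks
~*e!clrstack
```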

Typed DataSets are a type-safe wrapper around a DataSet which mirrors your database structure. They were created to make sure that the code accessing the database is type safe, so that any change in the database structure that alters tables, columns or column types will be caught at compilation time rather than at runtime.

If you have a big typed dataset that contains a lot of tables, columns and relations, it might be quite expensive to create in terms of memory and time.

The main reason that creating a big typed dataset is expensive is that all of the metadata contained within the typed dataset (tables, columns, relations) is created when you instantiate it, even if eventually all you’ll use it for is to retrieve data from a single table.

I can speculate that the reason all of the typed dataset metadata is created during instantiation is that a typed dataset inherits from the generic DataSet, and accessing the metadata (tables, columns) can also be done in a non-type-safe manner (i.e. accessing the Tables collection and/or the Columns collection of a table).

If you are using a typed dataset (or dataset in general) you might be interested in the following tips:

  • If you have a big typed dataset, avoid creating it too many times in the application. This is especially painful for web applications, where each request might create the dataset. You can use a generic DataSet instead, but this might lead to bugs due to database changes, and you’ll only be able to find these bugs at runtime rather than compilation time (which basically misses the whole point of using a typed dataset in the first place).
  • DataSets (typed datasets included) inherit from the MarshalByValueComponent class. That class implements IDisposable, which means DataSets will actually be garbage collected only after finalization (you can read more about finalization and the finalizer thread here). To make sure datasets are collected sooner and are not hanging around waiting for finalization, call the “Dispose” method of the dataset (or typed dataset) or use the “using” statement, which will call “Dispose” for you at the end of the code block.
  • Don’t use DataSets at all :-) Consider using a different data access layer with a different approach such as the one used in the SubSonic project.
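To illustrate the Dispose tip, here is a minimal, runnable sketch using a plain DataSet, which stands in here for your generated typed dataset:

```csharp
using System;
using System.Data;

class DisposeExample
{
    static void Main()
    {
        // The using block guarantees Dispose is called when the block exits,
        // so the dataset doesn't linger waiting for the finalizer thread.
        using (DataSet ds = new DataSet("Example"))
        {
            DataTable customers = ds.Tables.Add("Customers");
            customers.Columns.Add("Name", typeof(string));
            customers.Rows.Add("Alice");
            Console.WriteLine(customers.Rows.Count); // prints 1
        }
    }
}
```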

I guess it would be rather trivial to create a typed dataset that is lazy in nature, creating the metadata objects only when they are accessed for the first time. That would reduce the memory footprint of a large typed dataset but would make the computation used to create these objects a little less predictable. If you are up to it, or have already written a lazy typed dataset, ping me through the contact form :-)

I’ve previously written about Internal .Net Framework Data Provider error 6 and how to obtain the hotfix that resolves this issue.

It appears this hotfix never made it into .NET Framework 2.0 SP1, and there is a special fix (and KB article) to apply, because the hotfix mentioned in the previous post simply will not work.

To get the hotfix for a .NET Framework 2.0 SP1 installation, you’ll need to request the fix for KB948815 in the hotfix request web submission form.

In relation to my previous post about string.GetHashCode being used in the AJAXControlToolkit’s ToolkitScriptManager class, I wanted to talk about object.GetHashCode in general, and string.GetHashCode specifically.

The string.GetHashCode documentation states:

“The behavior of GetHashCode is dependent on its implementation, which might change from one version of the common language runtime to another. A reason why this might happen is to improve the performance of GetHashCode.”

The real conclusion from this paragraph is that you shouldn’t base your implementation on string.GetHashCode. But that is a bit too harsh, since GetHashCode in general, and string.GetHashCode specifically, are used throughout the runtime for internal purposes.

As long as you don’t share this hash code outside the boundary of the AppDomain, it’s relatively safe to use.

The reason you cannot (or should I say, should not) pass it across an AppDomain boundary is that the basic implementation of object.GetHashCode returns an integer representing the reference id of the object in the .NET runtime. That reference is not guaranteed to be the same for the same object in a different AppDomain/process/machine.

In the case of string.GetHashCode, where the implementation differs from the default one, you can pass it across AppDomains and even machines (though you shouldn’t count on that either!) as long as they have the same architecture, i.e., 32-bit to 32-bit and 64-bit to 64-bit.

All in all, the most recommended way of using x.GetHashCode is simply not using it at all. There are numerous implementations of hashing functions built into .NET (such as MD5, SHA1, SHA256, etc.) which are more consistent but may be a bit more expensive computation-wise.
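For example, a sketch of hashing a string with SHA256 from System.Security.Cryptography; unlike string.GetHashCode, the result is stable across architectures, processes and runtime versions:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class StableHash
{
    static string Sha256Hex(string input)
    {
        using (SHA256 sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(input));
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }

    static void Main()
    {
        // Identical on 32-bit and 64-bit, on any machine:
        // 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
        Console.WriteLine(Sha256Hex("hello"));
    }
}
```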

For all your outward-facing code I would recommend using one of the common, well-known hashing functions, especially if the code on the other side needs to recompute the hash and compare it with the hash value being passed.

If you mix and match 32-bit machines/processes with 64-bit machines/processes in an ASP.NET load-balanced environment and you are using the AJAXControlToolkit ToolkitScriptManager class, you might end up with the following error:

Message: Assembly “AjaxControlToolkit, Version=1.0.10920.32880, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e” does not contain a script with hash code “9ea3f0e2”.
Stack trace: at AjaxControlToolkit.ToolkitScriptManager.DeserializeScriptEntries(String serializedScriptEntries, Boolean loaded)

at AjaxControlToolkit.ToolkitScriptManager.OnLoad(EventArgs e)
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

The reason is that the request sent to retrieve the combined JavaScript files of the controls used in the page contains a hash code that is used internally. That hash code is generated from the script’s name using the default string.GetHashCode function, which returns different values in 32-bit and 64-bit processes.

If your 64-bit process creates the JavaScript include call, which contains the 64-bit hash code, and the request eventually reaches a 32-bit process/machine, you will get this error, since the hash value will not be found internally, and vice versa.

There has been an open issue about this on CodePlex since late January, but as of the time of this post, the latest source version on CodePlex doesn’t have a fix for it.

If you are getting the error “Internal .Net Framework Data Provider error 6” and you are using SQL Server 2005 with database mirroring, know that it’s a bug in the SQL managed client code.

Check out KB article 944099 for more info.

At the time of writing this post, the only solution to this problem is a private hotfix that you need to obtain from Microsoft.

The links around the web for the Hotfix Request Web Submission form seem to be broken. I’ve contacted Microsoft and got this replacement link.

Use this link to contact Microsoft, state that you require the hotfix for KB article 944099 and submit the form. You will be contacted shortly by a Microsoft representative who will give you the download details or request further information to better understand your needs.

I’ve just been asked the following question through the “Ask” widget (from Yedda, my day job) on the side of this blog:

Where is sos.dll for .NET 3.5?

I installed Visual Studio 2008, but I don’t see sos.dll in the Microsoft.NET\Framework\v3.5 directory? What’s up with that?


Asked by dannyR on January 03, 2008


To which I replied:

Where is sos.dll for .NET 3.5?

There is a bit of confusion as to the .NET Framework versions as opposed to the CLR (Common Language Runtime) versions.

Up until .NET Framework 2.0, the CLR versions advanced side by side with the versions of the framework.

Starting with .NET Framework v3.0, the version of the CLR stayed at 2.0 (there was no significant change in the CLR itself) while the framework’s version kept moving forward.

.NET Framework v3.5 actually runs on CLR 2.0 SP1.

This means that you should be able to use the SOS.dll located at Microsoft.NET\Framework\v2.0.50727 with either Visual Studio (as I’ve explained in this post) or with WinDbg.

Dave Broman of Microsoft has written a post about the mapping of .NET Framework versions to CLR versions. Do mind that, as Dave states in his post, it’s the way he figures things in his head and not official documentation on the matter.


Answered by Eran on January 03, 2008


I know it’s a bit confusing, and I’m not really sure why Microsoft detached the versions of the Framework from the versions of the CLR (well, I can understand some of the thoughts leading to this, but I don’t think it was that good of a move), but Dave’s excellent post does put things in perspective and lets you figure out which version of the CLR you are actually using.
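In practice, debugging a .NET 3.5 process in WinDbg means loading the SOS that ships with the 2.0 runtime, for example (the path assumes a default Windows installation on 32-bit; adjust for Framework64 or a custom Windows directory):

```text
.load C:\Windows\Microsoft.NET\Framework\v2.0.50727\sos.dll
```

Alternatively, once mscorwks.dll is loaded in the target process, `.loadby sos mscorwks` picks the matching SOS.dll from the runtime’s own directory automatically.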