Archive for the 'Managed Debugging' Category

I usually try to stay away from posts that only link to other blog posts, but Tess has a great post about various issues that cause .NET memory leaks.
I’ve talked about these issues before here and here and here.

It’s nice to see that others from Microsoft are sharing their knowledge and expertise on the subject of debugging with the rest of the world (specifically, when it comes to production debugging, hard .NET debugging problems and WinDbg 🙂 ).

Enjoy!

I’ve discovered something that might be a bug in some way.

In one of our web applications, one of the developers wrote a user control (ASCX). The ASCX file contained a DIV with the “runat=server” attribute, named (just for the sake of the sample) “X”. The ASCX.cs file contained another member called “x” (mind the case change!).

Since the compiler is case sensitive, there was no problem with this: the whole thing compiled and ran, and it even ran fine from Visual Studio while debugging. The problem started when we precompiled the application.

In our build process, we precompile the web applications using the ASP.NET Compiler (through MSBuild, of course 😉 ) and since there was no compilation problem, everything went fine and the build succeeded.

We put the new build in a dev machine and started working with it. When we reached a page that used that control we got the “Ambiguous match found” exception. After searching for it in the newsgroups I found that it is a reflection related problem, so I fired up my faithful WinDbg and dived in.

I attached to the ASP.NET worker process and it broke in when the exception was thrown. I looked at the call stack (using !clrstack) and saw a call to “System.RuntimeType.GetField”, a reflection function used to get a field of an object.

OS Thread Id: 0xf38 (14)
ESP EIP
019eecb0 77e55dea [HelperMethodFrame: 019eecb0]
019eed54 79617ef1 System.RuntimeType.GetField(System.String, System.Reflection.BindingFlags
.
.
.

I ran “dds 019eed54” (the address of the stack frame) to see what’s on the stack and to get the addresses/values of the two parameters passed to the function.

019eed54 0263588c
019eed58 00000001

The first two lines (shown above) correspond to the two parameters passed to the function (how much of the stack you need to look at depends on the number and types of the parameters; in our case there are only two). The second column contains the addresses/values. Since the first parameter is a string, the value 0263588c is actually the address of a string object, so I used !dumpobject on it and saw that its value was “x”.

The second parameter is an enumeration, so the value represents the integer value of that enum flag. I looked in Reflector and saw that the value 1 belongs to BindingFlags.IgnoreCase.

At that point I still hadn’t figured out the problem, so I went back to Reflector, took a look at the precompiled assembly this code was compiled into, and saw that I had two protected fields, one named “X” and one named “x”.

Bingo! That was the problem, staring back at me from the screen.

To sum things up, while this code is correct as far as the compiler is concerned (two members whose names differ only in casing are two different members), the code that later accesses them uses a case-insensitive reflection search, which causes the bug.
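
Here is a minimal stand-alone repro of that behavior (the class, field names and binding flags are my own illustration, not the exact call ASP.NET generates, but it fails the same way):

using System;
using System.Reflection;

class PageWithDiv
{
    // Two members whose names differ only in casing, like the generated
    // control field and the hand-written code-behind member.
    protected object X = null;
    protected object x = null;
}

class Repro
{
    static void Main()
    {
        try
        {
            // A case-insensitive lookup matches both "X" and "x"...
            typeof(PageWithDiv).GetField("x",
                BindingFlags.IgnoreCase | BindingFlags.Instance | BindingFlags.NonPublic);
        }
        catch (AmbiguousMatchException ex)
        {
            // ...so instead of a FieldInfo we get "Ambiguous match found."
            Console.WriteLine(ex.Message);
        }
    }
}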

Until I get more information from Microsoft, I suggest that in any precompiled ASP.NET application you avoid giving members in the ASPX/ASCX page and its code-behind the same name with different casing.

Did you know that you can use SOS from within Visual Studio 2005, not just from WinDbg?
It has been previously discussed in a couple of places but most people don’t know it is possible to do that.

“Why should I use the SOS extension from within Visual Studio?”, you are probably asking.
The answer is quite simple.

Visual Studio is a rich debugger that shows us most of the information we want, but it doesn’t show a few important things, such as the actual size of an object together with everything it references, or who is referencing our object. While these things are not straightforward in Visual Studio, they are very easy to accomplish in SOS. For example, you can run the following commands in SOS:

  • !objsize – to find out the actual size of the object and all of its referenced objects
  • !gcroot – to find who is referencing our object
  • !dumpheap – to see everything that is currently on the managed heap. By using the -gen [generation number] parameter with !dumpheap, we can see all the objects that are currently in that generation
  • !SyncBlk – to find out which managed locks are currently held (I’ve talked about it before)

So, how do we actually do that?

Well, the first thing you need to do is enable “unmanaged debugging” by going to the “Project Properties” -> “Debugging” -> “Enable Unmanaged Debugging”.

After doing that, you can stop somewhere in your program, open the “Immediate” window (Debug -> Windows -> Immediate) and type:

.load sos

This will load the SOS extension. From there on you have most of the commands of the SOS extension at your disposal.
You might need to set the symbols path by calling .sympath, just like you would do in WinDbg.
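
For example, a typical Immediate window session might look like this (the address and type name are placeholders; in a real session you would take them from the output of !dumpheap):

.load sos
!dumpheap -stat
!dumpheap -type MyNamespace.MyClass
!objsize 0263588c
!gcroot 0263588c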

An added bonus to this whole thing is that it also works on Visual Studio 2003, so you get to use it even with .NET Framework 1.1.

Say you have an ASP.NET application with some code written in the ASPX pages themselves (I know, it’s so ASP, but it happens sometimes 😉 ).
Say you need to update some of this code and change it, or you just want to extend an existing web application with some ASPX pages and you don’t want to get into a code-behind adventure.
Say that some of the code is giving you trouble and you really want to step through it in a debugger; it’s not even a production environment, and you even have Visual Studio installed.

What do you do?

System.Diagnostics.Debugger.Break();

That’s all.

This little line pops up the dialog you usually see when an unhandled error occurs, and if you have a debugger installed, Windows will offer to let you debug the problem.

Then you can simply select to open the debugger and debug your code.

Do mind that you’ll need to have the debug=”true” flag in your web.config (otherwise debug symbols will not be generated).
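
For reference, this is the relevant fragment of web.config (only the compilation element matters here):

<configuration>
  <system.web>
    <compilation debug="true" />
  </system.web>
</configuration>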

Easy, isn’t it?! 🙂

Following some of the samples posted in Mike Stall’s blog and after actually needing something like this at work, I’ve decided to write a small program called ExceptionDbg which uses MDbg’s APIs and tracks all exceptions being thrown in an application (including those that are being swallowed by try..catch blocks).

We had a case of exceptions being thrown in finalization code (this was Managed C++ code that has destructors and therefore has finalizers; I’ve talked about C++ destructors in this previous post).

ExceptionDbg lets you attach to a process and dump all of the exceptions being thrown, live, to your console and/or a log file.

There are a few filters and options on what gets logged, including:

  • Show only exceptions being caught by try..catch blocks
  • Show the full call stack and, if debug symbols are present, show line numbers and source file names as well.

So what is it good for, you ask?

  • Check exceptions in the finalizer thread – these exceptions are ALWAYS caught so that the finalizer thread will not die. It can mean that code that is supposed to free unmanaged resources never actually frees them, which is a potential memory leak.
  • Got a performance problem? Check what exceptions are being thrown and caught (by try..catch blocks) to verify that no piece of code uses exception throwing as control-flow logic for anything other than failure (see the short example after this list).
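
Here is a trivial C# example of the kind of code the second bullet is about (the method is my own illustration): nothing here ever surfaces as an error, yet every non-numeric input throws and swallows an exception, which a tool that logs first-chance exceptions will show immediately:

static int ParseOrZero(string s)
{
    try
    {
        // Throws FormatException for every non-numeric input, i.e. the
        // exception is used as control flow instead of validating first.
        return int.Parse(s);
    }
    catch (FormatException)
    {
        return 0;   // silently swallowed - invisible without first-chance tracking
    }
}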

As you can see, it’s useful in a number of scenarios, and it has helped me a lot in finding some of these problems.

You can get the binaries here and the source code here.

You need the .NET Framework 2.0 for this which can be downloaded from here.
If you want to compile it you’ll need the .NET Framework 2.0 SDK which you can download from here.

Enjoy and send me feedback!

I was working today on figuring out some Managed C++ code with a friend from work (Oran Singer), and I just wanted to outline some of the oddities of Managed C++ destructors, because I think it is important for everyone to understand how they work and what their quirks are.

We are still in the middle of figuring out something more serious (I will post on that later on).

I would like to “apologize” to the C#/VB.NET crowd in advance, since this post is all about Managed C++ today and a bit about its future, but I think it will be interesting nonetheless.

First of all, a Managed C++ class that has a destructor will include two additional methods because of that destructor:

  1. Finalize method
  2. __dtor method

Currently you cannot implement the Finalize method in Managed C++ directly by simply overriding it (it is defined as protected in System::Object), so the only way to get one is to add a destructor to the class.

This means that a Managed C++ class that has a destructor has a finalizer with all the implications of a .NET finalizer (I’ve written in more detail about finalizers here).

The __dtor method is called when you use the “delete” operator on a Managed C++ class instance (IMPORTANT NOTE: you can only call “delete” on a Managed C++ class instance whose class implements a destructor).

The implementation of the __dtor method is similar to what one would put in a Dispose method when implementing the IDisposable interface. It contains two calls: one to GC.SuppressFinalize (so the finalizer thread will skip calling the Finalize method of this instance) and one to Finalize (to actually run the destructor code).
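
For comparison, here is a minimal C# sketch (the class and helper names are mine, purely for illustration) of the pattern the generated __dtor corresponds to: suppress finalization, then run the destructor code:

using System;

class MyResource : IDisposable
{
    // Shared cleanup used both by Dispose and by the finalizer.
    void Cleanup()
    {
        // release unmanaged resources here
    }

    public void Dispose()
    {
        GC.SuppressFinalize(this); // the finalizer thread will now skip this instance
        Cleanup();                 // run the "destructor" code deterministically
    }

    ~MyResource()                  // compiled into the Finalize method
    {
        Cleanup();
    }
}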

Another interesting anecdote: if you have a pure virtual (abstract) class and you give it a destructor, that destructor must have a body even though the class is abstract, since the destructors of derived classes always call the base class’s destructor.

Looking a bit into the future: in Visual Studio 2005 (previously known as Whidbey), Visual C++ code that is compiled as managed code gets a few new features whose implementation is a bit different from what we saw above.

First of all, when you implement a destructor (“~MyClass()”) the compiler will actually fully implement the IDisposable interface and will automatically call the Dispose method (in C# you would either use a try..finally block and call Dispose in the finally part, or use the “using” keyword; see the short C# sketch after the list below) in the following cases:

  • A stack based object goes out of scope
  • A class member’s enclosing object is destroyed
  • A delete is performed on the pointer or handle.
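
For comparison, the C# sketch mentioned above: the only way a C# caller gets the same deterministic cleanup is to write it explicitly with a using block (assuming MyResource implements IDisposable as in the earlier sketch):

using (MyResource r = new MyResource())
{
    // use r
}   // r.Dispose() is called here, even if an exception was thrown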

In addition to that, you can explicitly implement a finalizer using the “!” syntax (“!MyClass()”). That finalizer is subject to the usual finalizer rules we already know; the main advantage is that it will NOT run if the destructor has already been called.

These new features allow more deterministic cleanup and finer control over managed C++ objects than was previously available.

I would like to thank Oran again for his help, mainly for forcing me to search some more on the subject and getting the idea to write this post.

Remember how, a few posts ago, I talked about CorDbg, the managed debugger?
According to this post on Mike Stall’s blog (a fine blog, btw), it’s out of the SDK.

CorDbg was the first managed debugger to be written. It was written in native code, since interop in those days was still being developed and was very unstable.
CorDbg also lacks features that are common in WinDbg, such as support for extensions and a nice way to instrument code.

It’s hard (or even impossible, according to this post on Mike Stall’s blog) to implement a managed debugger in managed code for v1.1 (although I know there are a few people messing around with it as we speak, me included 😉 ). That is why CorDbg is the only viable managed debugger for production debugging on v1.1.

To fill all of these gaps and to make things a lot easier and more powerful, here comes MDbg (ta-da).

MDbg is a managed debugger fully written in C#. It’s extensible through extensions written in managed code, can be used to instrument code, and is just so damn cool 😉

It gives you all the things you ever wanted from a production debugger and can also debug both .NET 1.1 and 2.0 code (to use it to debug 1.1 code you will still have to have the .NET 2.0 Runtime installed).

You can find out more information about MDbg here.
I am in the process of collecting some more information for a more elaborate post on MDbg and some of its features (probably a series of posts), so stay tuned.

WinDbg is a native debugger, and although using it combined with SOS for managed debugging is great, it sometimes lacks the finesse that only a managed debugger can give.

CorDbg is a managed debugger that ships with the .NET Framework SDK. It was not written in managed code, since it was the first managed debugger, written in the early days of .NET when you couldn’t yet rely on managed code to build a debugger for debugging managed code itself.

CorDbg knows and understands .NET better than WinDbg+SOS can, which is why you can debug .NET applications better with it.
For example, a simple scenario that is hard to do with WinDbg+SOS and very easy to do in CorDbg is to break only when specific .NET exceptions happen.
When I say specific I mean SPECIFIC, i.e., I can say that I want to ignore System.ThreadAbortException but break on System.FileNotFoundException.

To do that all I need to do is run the following commands:

catch exception System.FileNotFoundException
ignore exception System.ThreadAbortException

WinDbg (without sophisticated scripts that further analyze each exception) can only break on all CLR exceptions at once, since every CLR exception has the same native exception code.

Some more cool and useful things CorDbg can do include:

  • Easily set a breakpoint in managed code (and we all know it’s not that nice to set a managed breakpoint in WinDbg using SOS) using the b[reak] command:
    b foo.cs:42 – will break in file foo.cs at line 42
    b MyClass::MyFunction – will break at the start of MyClass::MyFunction
  • Disassemble to native or IL code using the dis[assemble] command.
  • Evaluate a function using the f[unceval] command:
    f MyNamespace.MyClass::MyMethod – for a static method
    f MyClass::MyMethod param1 param2 $result – for an instance method; the result is stored in $result for further evaluation
  • Print variables by name to see their contents (for local variables, only if debug symbols are available) using the p[rint] command, instead of first finding their addresses as you would have to in WinDbg:
    p obj1.var1 – for variables inside objects
    p MyClass::MyStaticVar – for static variables
    p arr[2] – to print a specific cell in an array
  • Change a variable’s value using the set command.
  • Set the next line of execution by simply invoking setip [line number] (like right-clicking and choosing “Set Next Statement” in Visual Studio .NET)

As you can see, using a managed debugger like CorDbg can be a lot nicer and easier than using WinDbg.

I’ve heard unsubstantiated rumors that when Microsoft developers have problems with a certain build of Visual Studio, they use CorDbg to debug and figure out the problem rather than track down what is wrong with Visual Studio itself.

Visual Studio .NET 2005 will come with a new managed debugger written in managed code, called MDbg. You can get it here (it requires the .NET Framework 2.0). MDbg has an extensions API so you can extend it just as you can extend WinDbg (but it’s even easier). It also features a nice, simple GUI for those of you who don’t like to use the keyboard too much.

You can read more about MDbg on Mike Stall’s blog.

I will dedicate another post to MDbg so stay tuned.

The Monitor class in .NET is a pure managed implementation of a Critical Section. It is used to synchronize multiple threads’ access to a code block.

C#’s lock statement and VB.NET’s SyncLock statement are actually compiled into a try..finally block that has Monitor.Enter at the start and Monitor.Exit inside the finally block.

For example:

[C#]
lock(myObject)
{
    // Some code requiring synchronization
}

[VB.NET]
SyncLock MyObject
    ' Some code requiring synchronization
End SyncLock


The actual code produced is:

[C#]
Monitor.Enter(myObject);
try
{
    // Some code requiring synchronization
}
finally
{
    Monitor.Exit(myObject);
}

[VB.NET]
Monitor.Enter(MyObject)
Try
    ' Some code requiring synchronization
Finally
    Monitor.Exit(MyObject)
End Try

There are two major issues to consider when using the Monitor class (or C#’s lock and VB.NET’s SyncLock statements):

  1. Monitor only works on reference types and NOT on value types. This means you cannot use an Integer as the object to lock on when entering a monitor, because it will be boxed into a new object on every call, effectively not synchronizing the code at all (see the sketch after this list).
  2. Locking should always be performed on private or internal objects.
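
A quick sketch of the first point (this is deliberately broken code, shown only to illustrate what goes wrong):

int counter = 0;

// lock (counter)               // does not even compile: error CS0185, 'int' is not a reference type

Monitor.Enter((object)counter); // boxes 'counter' into a brand-new object
try
{
    // Another thread doing the same thing boxes into a DIFFERENT object,
    // so the two threads never contend - nothing is actually synchronized.
}
finally
{
    Monitor.Exit((object)counter);  // boxes again into yet another object,
                                    // so Exit throws SynchronizationLockException
}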

There is a great deal of sample code floating around that locks the “this” (“Me” in VB.NET) object, which means the current instance is used as the object to lock on.

If this object is public and is (or can be) used by others (either inside your organization or outside, if you work in an organization to begin with 😉 ), then anyone who holds a reference to that instance can also use it for locking, and the two pieces of code can end up creating a deadlock.
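
To make that concrete, here is a small sketch (the types and the flow are my own illustration) of how locking on “this” can deadlock against external code that locks the same instance:

using System.Threading;

public class Worker
{
    public void DoWork()
    {
        lock (this)               // locks on the publicly reachable instance
        {
            // ... do something ...
        }
    }
}

public class Caller
{
    public static void Main()
    {
        Worker w = new Worker();
        lock (w)                  // external code takes the very same lock...
        {
            Thread t = new Thread(new ThreadStart(w.DoWork));
            t.Start();
            t.Join();             // ...and waits for DoWork, which is blocked on lock(this)
        }                         // classic deadlock - Join never returns
    }
}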

So, what should we do about that?

ALWAYS USE PRIVATE OR INTERNAL OBJECTS FOR LOCKS!

The best way to avoid a deadlock is to ALWAYS use a private object for locking.

For example:

[C#]
public class LockClass
{
    private static object myLock = new Object();

    public void DoIt()
    {
        lock (myLock)
        {
            // Some code block that is in
            // need of a sync
        }
    }
}

[VB.NET]
Public Class LockClass
    Private Shared MyLock As Object = New Object()

    Public Sub DoIt()
        SyncLock MyLock
            ' Some code block that is in
            ' need of a sync
        End SyncLock
    End Sub
End Class

Remember the two guidelines mentioned above and you will save yourself a lot of time chasing an elusive managed deadlock that is usually hard to reproduce.

From day one, .NET has provided an easy-to-use ThreadPool that handles all of your queuing/dispatching needs.

While the ThreadPool is a cool thing that saves you a lot of time dealing with multi-threading issues, you must be careful when using it and understand how it works and who else is using it, to avoid common pitfalls.

Who Uses the ThreadPool?

  • Any asynchronous delegate invocation (BeginInvoke/EndInvoke pattern)
  • HttpWebRequest uses it internally to queue and dispatch web requests.
  • WebRequest uses it to dispatch the request
  • Some timers (like System.Threading.Timer).
  • Any of your own code that uses the ThreadPool directly (for example, through ThreadPool.QueueUserWorkItem).

The Problem

Consider a situation in which one of the above fills up the ThreadPool with queued items, and each queued item is implemented in a way that also uses the ThreadPool (either directly or through an async invocation). We then reach a point where the ThreadPool is full and all of its worker threads are blocked, waiting to get some work done through the ThreadPool itself.

Bang! You are deadlocked! You have a circular dependency: code that runs inside a ThreadPool worker thread blocks because it runs code that also needs the ThreadPool, but the ThreadPool is already full, so that code is left waiting for a worker thread to run on.

Reaching this situation is quite easy under high load, since by default the ThreadPool is set to have 25 threads per CPU (a hyper-threaded CPU is seen as two CPUs by .NET and the operating system, so a ThreadPool running on such a machine will have 50 threads, 25 per logical CPU).

The Solution

There is actually more than one way of avoiding this situation:

  • Query the ThreadPool using ThreadPool.GetAvailableThreads to see whether you should queue the item or not. If you can’t queue it, either throw an exception or wait until you can (just don’t wait while you are running on a ThreadPool worker thread, e.g. inside an ASP.NET page 😉 ; that is exactly the situation we are trying to avoid). This is similar to the approach .NET takes internally. See the sketch after this list.
  • Avoidance – Don’t call this code from within a place that uses the ThreadPool such as from within ASP.NET or from within an Async call.
  • Implement your own ThreadPool instead of using .NET’s ThreadPool – This is not as easy as it sounds, though.
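
As an illustration of the first option, here is a minimal C# sketch (the threshold and the DoWork method are illustrative assumptions of mine, not the exact check .NET performs internally):

using System;
using System.Threading;

class WorkQueue
{
    public static void QueueIfRoom(object state)
    {
        int workerThreads, completionPortThreads;
        ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);

        if (workerThreads > 2)   // keep some headroom; the threshold is arbitrary
        {
            ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), state);
        }
        else
        {
            // Refuse, queue elsewhere, or run the work synchronously - just don't block
            // a ThreadPool thread waiting for the ThreadPool itself to drain.
            throw new InvalidOperationException("The ThreadPool is saturated");
        }
    }

    static void DoWork(object state)
    {
        // the actual work item
    }
}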

There is an additional solution, but it is ONLY applicable in the upcoming .NET 2.0: explicitly call ThreadPool.SetMaxThreads. By doing so you can increase the number of threads available to the ThreadPool.
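
For example (the numbers here are illustrative only, not a recommendation):

bool changed = ThreadPool.SetMaxThreads(100, 200);  // max worker threads, max completion-port threads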

NOTE: Do mind that having too many threads per CPU is not recommended, since the operating system will spend a lot of time just making sure none of them is starved.

As we can see, the ThreadPool is a great thing that saves us the time and effort of implementing a thread pool of our own, with all of its synchronization and other threading issues, but we must also understand its limitations, how it works, and what it does, so that we can avoid unnecessary deadlocks and/or strange behavior.