Hi, I have searched through the various posts and answers regarding a perpetually blocked finalizer thread. Several seem useful, but I need additional clarity.
In my application, API calls come from a Web Service to an NT Service. I am working in a test environment and can reproduce this "effect" over and over. In each test I follow the same general procedure, altering it to exercise various API calls and see what those APIs do to the memory profile of the application.
As I was going through this, I noticed that occasionally when I stop the load test, I get a "thread was being aborted" exception. This is also rarely seen in the logs from a customer environment. It seems to happen when an API call is made and then the client disconnects for whatever reason.
However, after getting this exception, I notice that the finalizer thread of my process is hung. (The NT Service, not w3wp.)
Regardless of how long the process is left unburdened and unused, this is the top of the output from !finalizequeue:
0:002> !finalizequeue
SyncBlocks to be cleaned up: 0
Free-Threaded Interfaces to be released: 0
MTA Interfaces to be released: 0
STA Interfaces to be released: 0
----------------------------------
generation 0 has 15 finalizable objects (0c4b3db8->0c4b3df4)
generation 1 has 9 finalizable objects (0c4b3d94->0c4b3db8)
generation 2 has 771 finalizable objects (0c4b3188->0c4b3d94)
Ready for finalization 178 objects (0c4b3df4->0c4b40bc)
I can add to the objects "Ready For Finalization" by making API calls, but the Finalizer thread never seems to move on and empty the list.
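To double-check from inside the process that the finalizer thread really is not making progress (rather than the queue simply refilling), a small "canary" can be used. This is just a sketch I put together for illustration; the class and method names are mine, not from the application:

using System;
using System.Threading;

static class FinalizerProbe
{
    sealed class Canary
    {
        private readonly ManualResetEventSlim _signal;
        public Canary(ManualResetEventSlim signal) { _signal = signal; }
        ~Canary() { _signal.Set(); }            // runs on the finalizer thread, if it is alive
    }

    // Returns true only if the finalizer thread ran ~Canary within the timeout.
    public static bool FinalizerIsResponsive(TimeSpan timeout)
    {
        var signal = new ManualResetEventSlim(false);
        new Canary(signal);                     // no reference kept; collectible immediately
        GC.Collect();                           // pushes the canary onto the finalization queue
        return signal.Wait(timeout);            // never signalled if the finalizer is stuck
    }
}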
The finalizer thread shown above is thread 002. Here is the call-stack:
0:002> !clrstack
OS Thread Id: 0x29bc (2)
Child SP IP Call Site
03eaf790 77b4f29c [DebuggerU2MCatchHandlerFrame: 03eaf790]
0:002> kb 2000
# ChildEBP RetAddr Args to Child
00 03eae910 773a2080 00000001 03eaead8 00000001 ntdll!NtWaitForMultipleObjects+0xc
01 03eaeaa4 77047845 00000001 03eaead8 00000000 KERNELBASE!WaitForMultipleObjectsEx+0xf0
02 03eaeaf0 770475f5 05b72fb8 00000000 ffffffff combase!MTAThreadWaitForCall+0xd5 [d:\rs1\onecore\com\combase\dcomrem\channelb.cxx @ 7290]
03 03eaeb24 77018457 03eaee5c 042c15b8 05b72fb8 combase!MTAThreadDispatchCrossApartmentCall+0xb5 [d:\rs1\onecore\com\combase\dcomrem\chancont.cxx @ 227]
04 (Inline) -------- -------- -------- -------- combase!CSyncClientCall::SwitchAptAndDispatchCall+0x38a [d:\rs1\onecore\com\combase\dcomrem\channelb.cxx @ 6050]
05 03eaec40 76fbe16b 03eaee5c 03eaee34 03eaee5c combase!CSyncClientCall::SendReceive2+0x457 [d:\rs1\onecore\com\combase\dcomrem\channelb.cxx @ 5764]
06 (Inline) -------- -------- -------- -------- combase!SyncClientCallRetryContext::SendReceiveWithRetry+0x29 [d:\rs1\onecore\com\combase\dcomrem\callctrl.cxx @ 1734]
07 (Inline) -------- -------- -------- -------- combase!CSyncClientCall::SendReceiveInRetryContext+0x29 [d:\rs1\onecore\com\combase\dcomrem\callctrl.cxx @ 632]
08 03eaec9c 77017daa 05b72fb8 03eaee5c 03eaee34 combase!DefaultSendReceive+0x8b [d:\rs1\onecore\com\combase\dcomrem\callctrl.cxx @ 590]
09 03eaee10 76f72fa5 03eaee5c 03eaee34 03eaf3d0 combase!CSyncClientCall::SendReceive+0x68a [d:\rs1\onecore\com\combase\dcomrem\ctxchnl.cxx @ 767]
0a (Inline) -------- -------- -------- -------- combase!CClientChannel::SendReceive+0x7c [d:\rs1\onecore\com\combase\dcomrem\ctxchnl.cxx @ 702]
0b 03eaee3c 76e15eea 05b59e04 03eaef48 17147876 combase!NdrExtpProxySendReceive+0xd5 [d:\rs1\onecore\com\combase\ndr\ndrole\proxy.cxx @ 1965]
0c 03eaf358 76f73b30 76f5bd70 76f84096 03eaf390 rpcrt4!NdrClientCall2+0x53a
0d 03eaf378 7706313f 03eaf390 00000008 03eaf420 combase!ObjectStublessClient+0x70 [d:\rs1\onecore\com\combase\ndr\ndrole\i386\stblsclt.cxx @ 217]
0e 03eaf388 77026d85 05b59e04 03eaf3d0 011d0940 combase!ObjectStubless+0xf [d:\rs1\onecore\com\combase\ndr\ndrole\i386\stubless.asm @ 171]
0f 03eaf420 77026f30 011d0930 73b1a9e0 03eaf4e4 combase!CObjectContext::InternalContextCallback+0x255 [d:\rs1\onecore\com\combase\dcomrem\context.cxx @ 4401]
10 03eaf474 73b1a88b 011d0940 73b1a9e0 03eaf4e4 combase!CObjectContext::ContextCallback+0xc0 [d:\rs1\onecore\com\combase\dcomrem\context.cxx @ 4305]
11 03eaf574 73b1a962 73b689d0 03eaf60c b42b9326 clr!CtxEntry::EnterContext+0x252
12 03eaf5ac 73b1a9a3 73b689d0 03eaf60c 00000000 clr!RCW::EnterContext+0x3a
13 03eaf5d0 73b1eed3 03eaf60c b42b936a 740ecf60 clr!RCWCleanupList::ReleaseRCWListInCorrectCtx+0xbc
14 03eaf62c 73b6118f b42b90f6 03eaf790 00000000 clr!RCWCleanupList::CleanupAllWrappers+0x119
15 03eaf67c 73b61568 03eaf790 73b60f00 00000001 clr!SyncBlockCache::CleanupSyncBlocks+0xd0
16 03eaf68c 73b60fa9 b42b9012 03eaf790 73b60f00 clr!Thread::DoExtraWorkForFinalizer+0x75
17 03eaf6bc 73a7b4c9 03eaf7dc 011a6248 03eaf7dc clr!FinalizerThread::FinalizerThreadWorker+0xba
18 03eaf6d0 73a7b533 b42b91fe 03eaf7dc 00000000 clr!ManagedThreadBase_DispatchInner+0x71
19 03eaf774 73a7b600 b42b915a 73b7a760 73b60f00 clr!ManagedThreadBase_DispatchMiddle+0x7e
1a 03eaf7d0 73b7a758 00000001 00000000 011b3120 clr!ManagedThreadBase_DispatchOuter+0x5b
1b 03eaf7f8 73b7a81f b42b9ebe 73b7a760 00000000 clr!ManagedThreadBase::FinalizerBase+0x33
1c 03eaf834 73af15a1 00000000 00000000 00000000 clr!FinalizerThread::FinalizerThreadStart+0xd4
1d 03eaf8d0 753c62c4 011ae320 753c62a0 c455bdb0 clr!Thread::intermediateThreadProc+0x55
1e 03eaf8e4 77b41f69 011ae320 9d323ee5 00000000 kernel32!BaseThreadInitThunk+0x24
1f 03eaf92c 77b41f34 ffffffff 77b6361e 00000000 ntdll!__RtlUserThreadStart+0x2f
20 03eaf93c 00000000 73af1550 011ae320 00000000 ntdll!_RtlUserThreadStart+0x1b
By repeating this several times, I have correlated getting the "thread was being aborted" exception with getting a hung finalizer showing this call stack each time. Can anyone clarify what the finalizer is waiting on, and anything else related that might help identify a solution?
Thanks! Feel free to ask questions if you need more information.
Edit, additions follow: My supervisor sent me the code for System.ComponentModel.Component after reading the analysis by Hans, and I need to get clarification on a certain point. (Even if the code I was sent was somehow the wrong version.)
Here is the finalizer for System.ComponentModel.Component:
~Component() {
    Dispose(false);
}
Here is Dispose(bool):
protected virtual void Dispose(bool disposing) {
    if (disposing) {
        lock(this) {
            if (site != null && site.Container != null) {
                site.Container.Remove(this);
            }
            if (events != null) {
                EventHandler handler = (EventHandler)events[EventDisposed];
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }
    }
}
Hopefully I am not making a dumb blunder here, but if the finalizer thread only ever calls Dispose(false), then this Dispose cannot possibly block on anything. If that is true, am I barking up the wrong tree? Should I be looking at some other class?
How can I use the dump file that I have to determine the actual object type that the finalizer is hanging on?
2nd Edit: I ran my load test and monitored the Finalization Survivors counter until I saw it start to rise, then stopped the load test. The result was that Finalization Survivors was "stuck" at 88, even after I forced a GC a few times, and never came back down. I ran iisreset and restarted SQL Server, but that statistic remained stuck at 88.
I took a dump of the process and captured the !finalizequeue output, and while I was puzzling over that, I noticed that perfmon suddenly registered a drop in Finalization Survivors about 15 minutes after I took the dump.
I took another dump, recaptured the !finalizequeue output, and compared it to the previous one. Most of it was the same, with the following differences:
Before  After  Diff  Type
4 0 -4 System.Transactions.SafeIUnknown
6 0 -6 OurCompany.Xpedite.LogOutResponseContent
20 0 -20 System.Net.Sockets.OverlappedCache
8 2 -6 System.Security.SafeBSTRHandle
21 3 -18 System.Net.SafeNativeOverlapped
6 4 -2 Microsoft.Win32.SafeHandles.SafeFileHandle
8 9 1 System.Threading.ThreadPoolWorkQueueThreadLocals
19 1 -18 System.Net.Sockets.OverlappedAsyncResult
7 2 -5 System.Data.SqlClient.SqlDataAdapter
7 2 -5 System.Data.DataSet
13 7 -6 OurCompany.Services.Messaging.Message
79 13 -66 Microsoft.Win32.SafeHandles.SafeAccessTokenHandle
6 4 -2 System.IO.FileStream
24 3 -21 System.Data.SqlClient.SqlConnection
40 22 -18 Microsoft.Win32.SafeHandles.SafeWaitHandle
17 3 -14 System.Data.SqlClient.SqlCommand
14 4 -10 System.Data.DataColumn
7 2 -5 System.Data.DataTable
21 20 -1 System.Threading.Thread
73 68 -5 System.Threading.ReaderWriterLock
9/16/2022: Additional information leading to a cause and possible solution:
Examining the call stack of the finalizer thread, the variables of frame 03 include an entry called pOXIDEntry, which contains the thread number of the target thread that the finalizer is waiting on. To illustrate:
In the screenshot above, you can see the finalizer is thread 2 (OSID 382c). In the variables passed to MTAThreadDispatchCrossApartmentCall, it is targeting thread 5 (OSID 2e30). You will also note that this target thread is (mysteriously) marked as an STA, rather than an MTA like all of the other thread-pool workers.
When a COM object that was created on an STA thread is cleaned up, the call is marshalled to that thread, as described here: https://learn.microsoft.com/en-us/troubleshoot/windows/win32/descriptions-workings-ole-threading-models
The problem in this case is that the code has erroneously set Thread.CurrentThread.ApartmentState = ApartmentState.STA for some unknown reason (this is under investigation).
When that line of code executes, the thread is indeed marked as an STA, but it does not automatically get a message pump to process the messages sent to it. As a result it never receives those messages, and hence never replies.
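For illustration only (the class and method names below are mine, and in our case the switch actually happened via native CoInitialize rather than this managed setter), the problematic pattern versus a safer alternative looks roughly like this:

using System;
using System.Threading;

static class ApartmentExamples
{
    // Roughly the problematic pattern: a thread-pool worker is flipped to STA but
    // never runs a message pump, so any call later marshalled to it (for example,
    // by the finalizer releasing an RCW) is queued and never serviced.
    static void ProblematicWorker(object state)
    {
        // Newer runtimes may refuse this on a pool thread; a native
        // CoInitialize(NULL) inside a called library has the same effect.
        Thread.CurrentThread.TrySetApartmentState(ApartmentState.STA);
        // ... create and use an apartment-affined COM object, then return the
        // thread to the pool without ever pumping messages ...
    }

    // Safer pattern when STA COM objects are genuinely required: a dedicated
    // thread declared STA before it starts, which owns those objects for its lifetime.
    static Thread StartDedicatedStaThread(ThreadStart body)
    {
        var t = new Thread(body);
        t.SetApartmentState(ApartmentState.STA);   // must be set before Start()
        t.IsBackground = true;
        t.Start();
        return t;
    }
}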
The blocked finalizer does not always happen because, usually, all COM objects are explicitly disposed on the API thread that created them. So in virtually all of the normal processing cases, the finalizer thread is not involved in the cleanup of these COM objects.
The one rare exception is when a Thread.Abort occurs on one of these threads, which can be imposed on the thread by code in the WSE2 client library. Depending on exactly when this happens, a COM object can be left for the finalizer to clean up. Only in that case does the finalizer see the object, find that it was created on an STA thread, and attempt to marshal the cleanup call. When that happens, the worker thread that was erroneously marked as STA never replies, and hence the finalizer is blocked at that point.
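For completeness, the normal explicit-dispose path described above amounts to something like the following sketch (the factory delegate is a stand-in I made up for whatever interop call the API actually performs):

using System;
using System.Runtime.InteropServices;

static class ComCleanupExample
{
    static void UseComObject(Func<object> createComObject)
    {
        object rcw = null;
        try
        {
            rcw = createComObject();
            // ... call methods on the wrapper on this same API thread ...
        }
        finally
        {
            // Deterministic release on the creating thread, so the finalizer never
            // has to marshal a cross-apartment call. A Thread.Abort injected before
            // the assignment completes can still leave the wrapper to the finalizer,
            // which is the rare case described above.
            if (rcw != null && Marshal.IsComObject(rcw))
                Marshal.ReleaseComObject(rcw);
        }
    }
}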
This issue is now resolved.
Ultimately, the source of this issue is that calls were being made in one of our Win32 client libraries to CoInitialize, which uses the STA model. This potential issue was identified by Microsoft prior to 2006 and is detailed in depth in a blog post by Microsoft Principal Developer Tess Ferrandez at this address: https://www.tessferrandez.com/blog/2006/03/26/net-memory-leak-unblock-my-finalizer.html
As detailed in the post, COM can flip a ThreadPool worker thread over to the STA model, but thread-pool threads were not designed to support that threading model.
The application worked most of the time because the COM objects were normally disposed of explicitly on the API thread that created them, so the finalizer was not involved in the cleanup. The rare exception was when the WSE2 client library forced a Thread.Abort on one of our API threads at just such a moment that an explicit dispose became impossible; in those cases the finalizer was involved.
The fix for this issue was to search our client library for calls to CoInitialize(nil) and convert those calls, and the underlying code, to use the multithreaded model, essentially changing them to CoInitializeEx(nil, COINIT_MULTITHREADED);
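The actual change was made in the native Win32 client library, so the code below is only a hedged managed-code illustration of the same idea (the declarations and names are mine):

using System;
using System.Runtime.InteropServices;

static class ComInit
{
    const uint COINIT_MULTITHREADED = 0x0;       // value from objbase.h

    [DllImport("ole32.dll")]
    static extern int CoInitializeEx(IntPtr pvReserved, uint dwCoInit);

    // Old native code: CoInitialize(nil) -> apartment-threaded (STA) by definition.
    // New native code: CoInitializeEx(nil, COINIT_MULTITHREADED) -> the thread joins
    // the MTA, the same apartment the finalizer thread lives in, so releasing an RCW
    // no longer needs any cross-apartment marshalling.
    public static void InitializeForMta()
    {
        int hr = CoInitializeEx(IntPtr.Zero, COINIT_MULTITHREADED);
        if (hr < 0)
            Marshal.ThrowExceptionForHR(hr);     // e.g. RPC_E_CHANGED_MODE
    }
}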
After initializing COM for the MTA, there was no longer any need for the finalizer to do any marshalling in these Thread.Abort cases, and the issue was resolved.