Suboptimal ReaderWriterLockSlim usage in TypedHeaderManager #5811

@KalleOlaviNiemitalo

Description

Noticed this while browsing the source code around here:

[SuppressMessage(FxCop.Category.Usage, "CA2301:EmbeddableTypesInContainersRule", MessageId = "cache", Justification = "No need to support type equivalence here.")]
private static TypedHeaderManager GetTypedHeaderManager(Type t)
{
    TypedHeaderManager result = null;
    bool readerLockHeld = false;
    bool writerLockHeld = false;
    try
    {
        try { }
        finally
        {
            s_cacheLock.TryEnterUpgradeableReadLock(Timeout.Infinite);
            readerLockHeld = true;
        }
        if (!s_cache.TryGetValue(t, out result))
        {
            s_cacheLock.TryEnterWriteLock(Timeout.Infinite);
            writerLockHeld = true;
            if (!s_cache.TryGetValue(t, out result))
            {
                result = (TypedHeaderManager)Activator.CreateInstance(s_GenericAdapterType.MakeGenericType(t));
                s_cache.Add(t, result);
            }
        }
    }
    finally
    {
        if (writerLockHeld)
        {
            s_cacheLock.ExitWriteLock();
        }
        if (readerLockHeld)
        {
            s_cacheLock.ExitUpgradeableReadLock();
        }
    }
    return result;
}

In System.ServiceModel.TypedHeaderManager, there is a private static ReaderWriterLockSlim s_cacheLock field for synchronising access to private static Dictionary<Type, TypedHeaderManager> s_cache. GetTypedHeaderManager(Type t) uses this lock as follows: it first takes an upgradeable read lock and checks whether s_cache already has an entry for the Type. If not, it takes a write lock and checks again; if the Type still isn't there, it adds it. Finally, it releases the locks.

The problem with this kind of locking is that, according to the ReaderWriterLockSlim documentation, "Only one thread can enter a lock in upgradeable mode at any given time." So this fancy two-level locking actually works as one-level locking and prevents parallel lookups in s_cache, even if all relevant types are in the cache already. In ReaderWriterLockSlim, the only kind of lock that can be held by multiple threads at the same time is a non-upgradeable read lock, but nothing ever takes that kind of lock on s_cacheLock.
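The exclusivity of upgradeable mode is easy to demonstrate with a standalone program (a minimal sketch, not related to the TypedHeaderManager code; the names here are mine):

```csharp
using System;
using System.Threading;

class UpgradeableDemo
{
    static void Main()
    {
        var rwLock = new ReaderWriterLockSlim();

        // The main thread holds the upgradeable read lock...
        rwLock.EnterUpgradeableReadLock();

        // ...so a second thread cannot enter upgradeable mode, even though
        // nobody holds the write lock.
        bool secondEntered = false;
        var upgradeable = new Thread(() =>
        {
            secondEntered = rwLock.TryEnterUpgradeableReadLock(100);
            if (secondEntered) rwLock.ExitUpgradeableReadLock();
        });
        upgradeable.Start();
        upgradeable.Join();
        Console.WriteLine(secondEntered); // False: upgradeable mode is exclusive

        // A plain (non-upgradeable) read lock, by contrast, can be taken
        // concurrently with the upgradeable lock.
        bool readerEntered = false;
        var reader = new Thread(() =>
        {
            readerEntered = rwLock.TryEnterReadLock(100);
            if (readerEntered) rwLock.ExitReadLock();
        });
        reader.Start();
        reader.Join();
        Console.WriteLine(readerEntered); // True

        rwLock.ExitUpgradeableReadLock();
    }
}
```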

It can be fixed in any of these ways:

  • Option A: Give up and just use one-level locking, via lock (s_cache) or a Lock instance. This change would not improve parallelism, but would simplify the code and possibly avoid a Dictionary lookup.
  • Option B: Instead of taking an upgradeable read lock, take a non-upgradeable read lock. If the Type is not in the cache yet, then release the non-upgradeable read lock, take a write lock, and check again. This change would allow multiple read-only lookups in parallel, which seems to be the original intent of s_cacheLock.
  • Option C: Remove the locking and replace the Dictionary with ConcurrentDictionary. I don't think this is a good idea, because after the steady state has been reached and no more types are being added to s_cache, Dictionary seems likely to be more efficient than ConcurrentDictionary.
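Option B could look roughly like the following. This is only a sketch of the locking pattern, written as a self-contained generic cache rather than the actual TypedHeaderManager code; the class and member names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch of Option B: a non-upgradeable read lock for the fast path,
// and a write lock with a re-check for the slow path.
static class ReadMostlyCache
{
    private static readonly ReaderWriterLockSlim s_cacheLock = new ReaderWriterLockSlim();
    private static readonly Dictionary<Type, object> s_cache = new Dictionary<Type, object>();

    public static object GetOrAdd(Type t, Func<Type, object> factory)
    {
        object result;

        // Fast path: a non-upgradeable read lock can be held by any number
        // of threads at once, so cache hits no longer serialise.
        s_cacheLock.EnterReadLock();
        try
        {
            if (s_cache.TryGetValue(t, out result))
                return result;
        }
        finally
        {
            s_cacheLock.ExitReadLock();
        }

        // Slow path: the read lock was released before taking the write
        // lock, so re-check; another thread may have added the entry in
        // the meantime.
        s_cacheLock.EnterWriteLock();
        try
        {
            if (!s_cache.TryGetValue(t, out result))
            {
                result = factory(t);
                s_cache.Add(t, result);
            }
            return result;
        }
        finally
        {
            s_cacheLock.ExitWriteLock();
        }
    }
}
```

Unlike the upgradeable-read pattern, there is a window between releasing the read lock and acquiring the write lock, which is why the second TryGetValue is required; the cost is an occasional redundant lookup on a cache miss, in exchange for fully parallel cache hits.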

(Apart from that, the SuppressMessageAttribute on GetTypedHeaderManager also looks wrong, for these reasons:

  • It refers to "CA2301:EmbeddableTypesInContainersRule" and "type equivalence" but the current CA2301 rule relates to BinaryFormatter, not to using embedded COM interop Type instances as keys in a dictionary.
  • It has MessageId = "cache" but the name of the field is s_cache.)
