In .NET there is the null reference, which is used everywhere to denote that an object reference is empty, and then there is DBNull, which is used by database drivers (and a few others) to denote... pretty much the same thing. Naturally, this creates a lot of confusion, and conversion routines have to be churned out (one typical example is sketched below), etc.
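For illustration, this is the sort of helper that tends to get written over and over; the class and method names here are made up for the sketch, not part of any framework API:

    using System;

    static class DbValueHelper
    {
        // Map a .NET null to DBNull.Value before handing a value to ADO.NET,
        // e.g. when setting a command parameter.
        public static object ToDbValue(object value)
        {
            return value ?? DBNull.Value;
        }

        // Map DBNull.Value back to a .NET null when reading a value out,
        // e.g. from IDataRecord or ExecuteScalar.
        public static T FromDbValue<T>(object value) where T : class
        {
            return value == DBNull.Value ? null : (T)value;
        }
    }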
Why did the original .NET authors decide to do this? To me it makes no sense. Their documentation makes no sense either:
The DBNull class represents a nonexistent value. In a database, for example, a column in a row of a table might not contain any data whatsoever. That is, the column is considered to not exist at all instead of merely not having a value. A DBNull object represents the nonexistent column. Additionally, COM interop uses the DBNull class to distinguish between a VT_NULL variant, which indicates a nonexistent value, and a VT_EMPTY variant, which indicates an unspecified value.
What's this about a "column not existing"? A column exists; it just doesn't have a value for the particular row. If it didn't exist, I'd get an exception trying to access the specific cell, not a DBNull! I can understand the need to differentiate between VT_NULL and VT_EMPTY, but then why not make a COMEmpty class instead? That would be a much neater fit in the whole .NET Framework.
Am I missing something? Can anyone shed some light on why DBNull was invented and what problems it helps to solve?
The point is that in certain situations there is a difference between a database value being NULL and a .NET null.
For example, if you use ExecuteScalar (which returns the first column of the first row in the result set) and you get null back, that means the SQL executed did not return any rows at all. If you get DBNull back, it means the SQL did return a row, and the value in it was NULL. You need to be able to tell the difference, as the sketch below shows.
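A minimal sketch of that distinction, assuming a hypothetical People table with a nullable MiddleName column and an already-open SqlConnection (the table and method are invented for this example):

    using System;
    using System.Data.SqlClient;

    static class PersonQueries
    {
        public static string GetMiddleName(SqlConnection connection, int id)
        {
            using (var cmd = new SqlCommand(
                "SELECT MiddleName FROM People WHERE Id = @id", connection))
            {
                cmd.Parameters.AddWithValue("@id", id);
                object result = cmd.ExecuteScalar();

                if (result == null)
                {
                    // No rows at all: the query matched no person with this id.
                    throw new InvalidOperationException("No such person.");
                }
                if (result == DBNull.Value)
                {
                    // A row was found, but its MiddleName column is NULL
                    // in the database.
                    return null;
                }
                return (string)result;
            }
        }
    }

With only a .NET null to work with, the two cases in the first two branches would be indistinguishable.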