Consider the following scenario. I have a table (a stupid_table) in a schema over which I have no control. It's third party, off limits. No touchy. I can query it, but not add indexes or new tables or change the design.
Every column in the stupid_table is a VARCHAR2(50 BYTE). There are lots of columns, but I only need two of them: row_type and magic_number. The magic_number is populated with the string representation of an integer, but only where row_type is set to 'DATA', and I only need the magic numbers that are greater than zero.
SELECT TO_NUMBER(magic_number)
FROM stupid_table
WHERE row_type = 'DATA'
AND TO_NUMBER(magic_number) > 0;
This results in an "invalid number" Oracle error (ORA-01722), because the Cost Based Optimiser (CBO) is choosing to evaluate the TO_NUMBER before checking the row_type, and there's a whole bunch of rows with a different row_type and a different use for the magic_number field.
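For what it's worth, the failure is easy to reproduce in a scratch schema. The table and data below are hypothetical stand-ins for the real third-party table, not its actual definition:

-- Hypothetical stand-in: every column is a string, like the real table.
CREATE TABLE stupid_table (
  row_type     VARCHAR2(50 BYTE),
  magic_number VARCHAR2(50 BYTE)
);

-- One row of the kind I want, and one that reuses magic_number for
-- something entirely non-numeric.
INSERT INTO stupid_table VALUES ('DATA',   '42');
INSERT INTO stupid_table VALUES ('HEADER', 'NOT A NUMBER');

-- If the optimiser applies TO_NUMBER(magic_number) > 0 before
-- row_type = 'DATA', the 'HEADER' row raises ORA-01722.
SELECT TO_NUMBER(magic_number)
FROM stupid_table
WHERE row_type = 'DATA'
AND TO_NUMBER(magic_number) > 0;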
OK, how about if I filter the rows first, and then do the comparison?
SELECT TO_NUMBER(t.magic_number)
FROM (
SELECT magic_number
FROM stupid_table
WHERE row_type = 'DATA'
) t
WHERE TO_NUMBER(t.magic_number) > 0;
Now the CBO works out that it can merge the inline view back into the outer query, ignoring the cunning that I have employed and yielding an identical query plan to the original.
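You can watch the merge happen by comparing the plans. Assuming the default PLAN_TABLE is available, something like:

EXPLAIN PLAN FOR
SELECT TO_NUMBER(t.magic_number)
FROM (
  SELECT magic_number
  FROM stupid_table
  WHERE row_type = 'DATA'
) t
WHERE TO_NUMBER(t.magic_number) > 0;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

If the view has been merged, this should show a single full scan with both predicates applied in the same filter step, just like the original query.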
Finally, in frustration, I resort to dirty hacks: using the /*+ RULE */ query hint to force Oracle to use the old Rule Based Optimiser. This works like a dream, but it shouldn't be necessary, not to mention that it relies on a feature of Oracle which is no longer supported.
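For reference, the hint sits immediately after the SELECT keyword:

SELECT /*+ RULE */ TO_NUMBER(magic_number)
FROM stupid_table
WHERE row_type = 'DATA'
AND TO_NUMBER(magic_number) > 0;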
Is there a better way to do this?
Can you just avoid using TO_NUMBER altogether? It seems like that would improve performance anyway. Something like:
WHERE t.magic_number != '0'
If there could be negative numbers, or the numbers are floating point, you might need additional checks (see the sketch below), but it certainly seems feasible.
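For instance, assuming the values are plain integer strings with at most a leading minus sign (no zero-padding, no '+' signs, no decimals), the whole query could become:

-- TO_NUMBER now appears only in the SELECT list, so it is evaluated
-- only for rows that have already passed the string-only filters.
SELECT TO_NUMBER(magic_number)
FROM stupid_table
WHERE row_type = 'DATA'
AND magic_number != '0'
AND magic_number NOT LIKE '-%';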