In a Postgres database, I need to filter several hundred thousand rows in a table A, keeping only those rows whose IP address column (of type inet) matches any of several thousand IP address blocks (of type cidr) in another table B. I've tried various indexes on the inet addresses in the first table and on the cidr ranges in the second, but no matter what I do, the planner falls back to a nested loop over sequential scans, applying the << (is contained within) operator to every pair of addresses and prefixes.
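For concreteness, here's roughly the shape of the problem (table and column names below are made up, not my actual schema):

    -- Hypothetical schema: "hosts" holds the inet addresses, "blocks" the cidr ranges.
    CREATE TABLE hosts  (id serial PRIMARY KEY, addr inet NOT NULL);
    CREATE TABLE blocks (id serial PRIMARY KEY, net  cidr NOT NULL);

    -- The filter I'm trying to speed up: keep only hosts whose address falls
    -- inside any block. EXPLAIN shows a nested loop of sequential scans that
    -- evaluates addr << net for every (host, block) pair.
    SELECT h.*
    FROM   hosts h
    WHERE  EXISTS (SELECT 1 FROM blocks b WHERE h.addr << b.net);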
Is there a way to speed this up with indexes or other clever tricks? (I can resort to external procedural scripting, but I was wondering if it's doable within Postgres.)
Thanks!
Case closed. To make things fast, do the following:
Use the ip4r types available from http://pgfoundry.org/projects/ip4r, as pointed out by user bma. These types support indexing where Postgres's native inet/cidr types (up to Postgres 9.3) don't.
Do not compare against the ip4r range type directly; instead, expand each range into lower and upper bound columns, as suggested by user caskey and mentioned in the ip4r docs (https://github.com/petere/ip4r-cvs/blob/master/README.ip4r#L187). See the sketch below.
Given the above, if you use type ip4 (assuming you're dealing only with v4 addresses) for all compared address columns, the planner will use the indexes on those columns.
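Here's a rough sketch of what the fast setup ends up looking like (table and column names are made up; adapt to your own schema):

    -- Assumes the ip4r extension is installed (CREATE EXTENSION ip4r on 9.1+,
    -- or the SQL install script from the project page on older versions).
    CREATE EXTENSION ip4r;

    -- Addresses stored as ip4 instead of inet.
    CREATE TABLE hosts  (id serial PRIMARY KEY, addr ip4 NOT NULL);

    -- Each cidr block expanded into its lower and upper ip4 bounds.
    CREATE TABLE blocks (
        id    serial PRIMARY KEY,
        lower ip4 NOT NULL,
        upper ip4 NOT NULL
    );

    -- Populating the bounds from existing cidr data can look roughly like this
    -- (ip4r provides lower()/upper() and casts from cidr; check its docs):
    --   INSERT INTO blocks (lower, upper)
    --   SELECT lower(net::ip4r), upper(net::ip4r) FROM old_blocks;

    -- Plain btree indexes are enough once everything is ip4.
    CREATE INDEX hosts_addr_idx    ON hosts  (addr);
    CREATE INDEX blocks_bounds_idx ON blocks (lower, upper);

    -- The containment test becomes a simple range comparison that the planner
    -- can satisfy with index scans instead of nested sequential scans.
    SELECT h.*
    FROM   hosts h
    JOIN   blocks b ON h.addr BETWEEN b.lower AND b.upper;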
Thanks for the help, guys!