I have a table with details of data transfers. One of the fields is an IP associated with the transfer. I need to develop a query which will get me the subset of rows in the table matching one of 79 IPs. There are 608 distinct IPs in the table.
I have a file which has the required IPs separated by newlines. Is there a way to develop a query which reads this file to get the required rows, instead of me manually entering each of the IPs separated by OR?
If you have the text with IPs separated by newlines available in the database or your client, this query does the job: transform the list to an array, unnest it and join to the main table:
SELECT *
FROM (SELECT unnest(string_to_array(your_list_of_ips, E'\n')) AS ip) sub
JOIN data_transfers d USING (ip);
More about the functions used, string_to_array() and unnest(), in the manual.
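For example, with the list supplied inline as a string literal (the sample IPs below are placeholders for illustration; table and column names as in the query above):

SELECT *
FROM  (SELECT unnest(string_to_array(E'10.0.0.1\n10.0.0.2\n10.0.0.3', E'\n')) AS ip) sub  -- sample IPs for illustration
JOIN  data_transfers d USING (ip);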
COPY
To import from a file directly, you could use COPY. The data file has to be on the same machine as Postgres and you need to be a database superuser for this.
This time we already have a single IP per row:
CREATE TEMP TABLE tmp(ip text);
COPY tmp FROM '/path/to/file';
SELECT *
FROM tmp
JOIN data_transfers d USING (ip);
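If the file might contain duplicate IPs, a semi-join avoids multiplying rows from the main table - a minimal variant of the query above using the same temp table:

SELECT *
FROM   data_transfers d
WHERE  d.ip IN (SELECT ip FROM tmp);  -- semi-join: each matching transfer row returned once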
\copy
If your file is on a different machine or if you do not have superuser privileges, use the (mostly) equivalent \copy meta-command of psql instead. To do it from bash (as requested in the comment):
psql dbname
dbname=# \set ips `cat ips.txt`
dbname=# SELECT *
dbname-# FROM (SELECT unnest(string_to_array(:'ips', E'\n')) AS ip) sub
dbname-# JOIN data_transfers d USING (ip);
\set is the psql meta-command to set a variable - to the contents of a file in this case. ips.txt is your file with IPs. :'ips' is the psql syntax for single-quoted SQL interpolation of the variable.