I've been searching for a solution for a while now. Skip to (1) or (2) if you just want the questions; first I'll explain the situation.
My firm has upgraded our ERP system. My primary job is to create lists used by others in the firm, and I take all my data from this system's database. During the upgrade some of the data was converted to match the new version, some was left behind, and some was not touched at all and simply exported directly to the new database, which sits on a separate server. Basically the upgrade is a success and the new ERP system works as intended; however, a lot of my lists are now broken, because the data they rely on is missing, partly missing, or all there, depending on the table.
OK, so the problem is data I need that is missing from the new database and only exists in the old one. A UNION over the new and old databases should be able to fix that, but I do not want duplicate records: data that was converted to the new database still exists in the old database, so the same record could appear twice (it does, I tried it).
So here is version 2 of my solution.
I am lacking primary keys (it's an iSeries database), so I concatenated a combination of fields to make a unique key (it would take too long to explain exactly how I did that). In the end this results in a view with a UNION over the two databases, making sure no record exists twice.
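Roughly, the idea was something along these lines (purely illustrative, not my exact code; the columns are the ones the view below is keyed on):
-- Illustrative only: build a synthetic key by concatenating the identifying
-- columns, with a separator so adjacent values cannot run together.
-- Assumes CTCONO is numeric and the remaining columns are character.
SELECT TRIM(CHAR(CTCONO)) CONCAT '|' CONCAT CTDIVI
       CONCAT '|' CONCAT CTSTCO
       CONCAT '|' CONCAT CTSTKY
       CONCAT '|' CONCAT CTLNCD AS SYNTH_KEY
FROM NEWDB.CSYTAB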
(1) So this is what I have now: a view of the combined old and new table data, all built with checks on a "unique" key. Every time I need data that has been affected by the upgrade, I must run an expensive query against each table (some lists use these views more than 40 times). (Question 1) How can I cost-effectively take data from two different schemas/databases and bind it together?
(2) The only thing I can think of that would give me this performance is to create indexes instead of the views I built, but so far I haven't been able to find any information on how to do that. (Question 2) Can I create an index over two tables?
My database is AS/400 - iSeries (DB2 for i), but I am interested in a solution for any database type; I am very flexible with resources.
EDIT: here is the code used to create the view, with slight modifications:
SELECT
    CTCONO,
    CTDIVI,
    CTSTCO,
    CTSTKY,
    CTLNCD,
    CTTX40,
    CTTX15,
    CTPARM,
    CTTXID,
    CTRGDT,
    CTRGTM,
    CTLMDT,
    CTCHNO,
    CTCHID
FROM NEWDB.CSYTAB
UNION
SELECT * FROM OLDDB.CSYTAB
WHERE ( CTCONO, CTDIVI, CTSTCO, CTSTKY, CTLNCD ) NOT IN
(
    SELECT A.CTCONO, A.CTDIVI, A.CTSTCO, A.CTSTKY, A.CTLNCD
    FROM NEWDB.CSYTAB A, OLDDB.CSYTAB B
    WHERE A.CTCONO = B.CTCONO
      AND A.CTDIVI = B.CTDIVI
      AND A.CTSTCO = B.CTSTCO
      AND A.CTSTKY = B.CTSTKY
      AND A.CTLNCD = B.CTLNCD
)
Make a new table that stores the result of your expensive query, given that you always want to ignore the old data whenever there is a record for it in the new DB. Then add some triggers so that this new table is updated whenever the underlying tables change.
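A minimal sketch of that idea on DB2 for i, assuming the combined view from the question is called MYLIB.CSYTAB_COMBINED and that MYLIB is a library you control (both names are placeholders), and using the CSYTAB columns shown in the question. Only the insert case on the new table is shown; update/delete triggers and handling of an old-database row being superseded are left out:
-- One-off: materialize the result of the expensive query into a plain table.
CREATE TABLE MYLIB.CSYTAB_MERGED AS
    (SELECT * FROM MYLIB.CSYTAB_COMBINED) WITH DATA;

-- Sketch of a trigger keeping that table current when rows are added to the new DB.
-- If the same key was already copied over from OLDDB, that row would have to be
-- replaced rather than duplicated; that logic is omitted here.
CREATE TRIGGER CSYTAB_MERGED_INS
    AFTER INSERT ON NEWDB.CSYTAB
    REFERENCING NEW AS N
    FOR EACH ROW MODE DB2ROW
    INSERT INTO MYLIB.CSYTAB_MERGED
        (CTCONO, CTDIVI, CTSTCO, CTSTKY, CTLNCD, CTTX40, CTTX15, CTPARM,
         CTTXID, CTRGDT, CTRGTM, CTLMDT, CTCHNO, CTCHID)
    VALUES
        (N.CTCONO, N.CTDIVI, N.CTSTCO, N.CTSTKY, N.CTLNCD, N.CTTX40, N.CTTX15, N.CTPARM,
         N.CTTXID, N.CTRGDT, N.CTRGTM, N.CTLMDT, N.CTCHNO, N.CTCHID);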
Perhaps a better question would be to provide your schema and your current expensive query, and then ask people to help make it faster.
Edit: now that you have posted your query, I see one thing you could improve. Make the second part of your query this:
...
UNION
SELECT * FROM OLDDB.CSYTAB B
WHERE NOT EXISTS (
    SELECT 1
    FROM NEWDB.CSYTAB A
    WHERE A.CTCONO = B.CTCONO
      AND A.CTDIVI = B.CTDIVI
      AND A.CTSTCO = B.CTSTCO
      AND A.CTSTKY = B.CTSTKY
      AND A.CTLNCD = B.CTLNCD
)
Then, provided you have a single index that spans { CTCONO, CTDIVI, CTSTCO, CTSTKY, CTLNCD } on NEWDB.CSYTAB, it should give you much better performance than what you're getting currently.
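For example, such an index could be created like this (the index name is a placeholder, and you need authority to create objects in that schema):
-- Index over the five key columns used by the NOT EXISTS probe.
CREATE INDEX NEWDB.CSYTAB_KEY_IX
    ON NEWDB.CSYTAB (CTCONO, CTDIVI, CTSTCO, CTSTKY, CTLNCD);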