I am processing a data upload in which an Application Engine publishes messages asynchronously through PeopleSoft Integration Broker. The whole point is to be able to send several messages and consume them on the same node. Before publishing, I stage the data in a table (say T1) that holds all the field values from the upload file.
On the consuming side, each message is pushed through a Component Interface, and any exceptions are logged back to the same table T1. For each completed transaction we flag a field on the table (say Processed_flag = 'Y').
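To illustrate the consumer side, here is a minimal sketch in Python, with an in-memory SQLite table standing in for T1 (in PeopleSoft this logic would live in PeopleCode in the subscription handler). ROW_ID, ERROR_TEXT, the 'E' error flag, and apply_via_ci are all hypothetical names for this illustration, not real PeopleSoft APIs:

```python
import sqlite3

def consume_row(conn, row_id, apply_via_ci):
    """Process one staged row: push it through the CI, then flag the outcome.
    apply_via_ci is a stand-in for the real Component Interface call."""
    try:
        apply_via_ci(row_id)
        conn.execute("UPDATE T1 SET Processed_flag = 'Y' WHERE ROW_ID = ?",
                     (row_id,))
    except Exception as exc:
        # Log the CI exception on the same staging row ('E' is an assumed flag value)
        conn.execute(
            "UPDATE T1 SET Processed_flag = 'E', ERROR_TEXT = ? WHERE ROW_ID = ?",
            (str(exc), row_id))
    conn.commit()

# Demo against a simulated T1 staging table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T1 (ROW_ID INTEGER, Processed_flag TEXT, ERROR_TEXT TEXT)")
conn.execute("INSERT INTO T1 VALUES (1, 'N', NULL)")
consume_row(conn, 1, lambda rid: None)  # the CI call succeeds, so the row is flagged 'Y'
```

The key design point is that a CI failure marks only its own staging row instead of aborting the whole subscription, so the other messages keep flowing.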
I need a mechanism to wait for all the asynchronous messages to complete. My idea is to poll T1: if any rows still have Processed_flag = 'N', put the thread to sleep for a while and check again, so the Application Engine cannot complete until every message has been processed.
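The wait-and-sleep idea looks roughly like this (again a Python sketch with an in-memory SQLite table standing in for T1; in an actual App Engine it would be a PeopleCode step). The timeout is my own addition, to keep the process from sleeping forever if a subscription dies:

```python
import sqlite3
import time

def wait_for_consumers(conn, poll_seconds=1.0, timeout_seconds=30.0):
    """Block until no T1 rows remain with Processed_flag = 'N',
    or give up after timeout_seconds."""
    deadline = time.monotonic() + timeout_seconds
    while True:
        (pending,) = conn.execute(
            "SELECT COUNT(*) FROM T1 WHERE Processed_flag = 'N'").fetchone()
        if pending == 0:
            return True           # every message has been consumed
        if time.monotonic() >= deadline:
            return False          # timed out: surface an error instead of hanging
        time.sleep(poll_seconds)  # let the subscribers catch up, then re-check

# Demo: all staged rows already consumed, so the wait returns immediately
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T1 (ROW_ID INTEGER, Processed_flag TEXT)")
conn.executemany("INSERT INTO T1 VALUES (?, 'Y')", [(1,), (2,)])
print(wait_for_consumers(conn))   # prints True: nothing left pending
```

Without some bound like the timeout, a single lost or errored message would leave the polling Application Engine asleep indefinitely.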
The main benefit is that I don't have to wait on multiple process instances or make synchronous calls. The whole idea is to let different transactions use the component concurrently (as if it were used by, say, 100 people -> 100 transactions).
Until those 100 transactions complete, T1 keeps a record of everything that goes on and off, and if something goes wrong it holds the exceptions caught by the CI.
Any comments on this approach would be appreciated. Thanks in advance!
Update: we ended up taking a different approach. Even if we could validate the data in those tables before the App Engine completes, blocking like that defeats the whole point of sending the messages asynchronously; in that case, synchronous messages and parallel processes would be better.
So we decided to let the Application Engine complete, publishing all the chunks of data as messages and making sure the messages are fully consumed on the same node.
As we consume the messages, we update T1 to mark each row as processed, successful, or failed, and use those statuses as needed.
We will keep an audit/counter of all the rows published and consumed. Exposing the same component to many simultaneous transactions has a significant performance impact, so we want to be sure it holds up when, say, 50 users update the tables behind the component through the same CI (in different instances, of course). I will be completing my proof of concept, and hopefully it will perform much better than running the processes in parallel.
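For the audit/counter, here is a minimal sketch of the reconciliation query (Python with in-memory SQLite standing in for T1; the flag values 'N'/'Y'/'E' for pending/success/error are my assumptions, not values from the original design):

```python
import sqlite3

def audit_counts(conn):
    """Reconcile rows published vs consumed using the T1 staging flags.
    Assumed flag values: 'N' = published, not yet consumed;
    'Y' = consumed successfully; 'E' = consumed with a CI error."""
    counts = dict(conn.execute(
        "SELECT Processed_flag, COUNT(*) FROM T1 GROUP BY Processed_flag"))
    return {
        "published": sum(counts.values()),          # every staged row was published
        "consumed": counts.get('Y', 0) + counts.get('E', 0),
        "pending": counts.get('N', 0),
        "errors": counts.get('E', 0),
    }

# Demo: four rows published, three consumed (one of them with an error)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T1 (ROW_ID INTEGER, Processed_flag TEXT)")
conn.executemany("INSERT INTO T1 VALUES (?, ?)",
                 [(1, 'Y'), (2, 'Y'), (3, 'E'), (4, 'N')])
print(audit_counts(conn))  # {'published': 4, 'consumed': 3, 'pending': 1, 'errors': 1}
```

When published equals consumed and pending is zero, the batch is reconciled; any leftover pending rows point at messages that were never delivered or never processed.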
I hope this helps anyone dealing with these upload issues. Thanks!