I am trying to write a script that exports some data using [cbq][1] and then imports that data into a target cluster via [cbimport][2]. I want to enhance the script so that it can export a large amount of data and import it into another cluster. However, it is failing on my local machine: the script gets stuck at the SELECT statement of the cbq command.
Can someone suggest how to do this? Below is the test script I am using:
echo "Hello World"
cbq -u Administrator -p Administrator -e "http://localhost:8093";
\REDIRECT temp.txt;
SELECT * FROM `sample.data` where id="106" --output="temp.txt";
\REDIRECT OFF;
cbimport json -c http://{target-cluster}:8091 -u Administrator -p Administrator -b sample.data -d file://C:\Users\myusername\Desktop\temp.txt -f list -g %docId%;
\EXIT;
Below is the output of the above script:
$ ./test.sh
Hello World
Connected to : http://localhost:8093/. Type Ctrl-D or \QUIT to exit.
Path to history file for the shell : C:\Users\myuser\.cbq_history
It gets stuck here for a very long time.
Specifically, in your script a semicolon terminates the cbq invocation right after the URL, so cbq starts in interactive mode and waits for input; the lines that follow are never passed to it.
You would want to try:
echo "Hello World"
cbq -u Administrator -p Administrator -e "http://localhost:8093" --output="temp.txt" -s "SELECT * FROM \`sample.data\` WHERE id='106'"
# add processing to convert from redirected output to cbimport format
cbimport json -c http://{target-cluster}:8091 -u Administrator -p Administrator -b sample.data -d file://C:\Users\myusername\Desktop\temp.txt -f list -g %docId%
as the three commands in your script. (Note that the statement itself uses no double quotes, since double quotes were chosen for the shell quoting; you could invert this choice too. Also, backticks inside a double-quoted shell string must be escaped as \`, or the shell will treat them as command substitution.)
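The commented conversion step can be sketched as below. This is a minimal sketch with several assumptions: that cbq writes its usual JSON envelope (`{"requestID": ..., "results": [...], "status": ...}`) to temp.txt, that `SELECT *` nests each document under the keyspace alias, and that python3 is available for the JSON unwrapping. The sample document, the alias name `data`, and the file names are illustrative only:

```shell
# Illustrative stand-in for the file cbq would produce (hypothetical document).
cat > temp.txt <<'EOF'
{"requestID": "r1", "results": [{"data": {"docId": "106", "name": "demo"}}], "status": "success"}
EOF

# Unwrap the envelope: cbimport with -f list expects a bare JSON array of
# documents, not cbq's result envelope.
python3 - <<'EOF'
import json

with open("temp.txt") as f:
    envelope = json.load(f)

# SELECT * nests each document under the keyspace alias ("data" here, an
# assumption); unwrap each row to get the bare document.
docs = [next(iter(row.values())) for row in envelope["results"]]

with open("docs.json", "w") as f:
    json.dump(docs, f)
EOF

# docs.json now holds: [{"docId": "106", "name": "demo"}]
cat docs.json
```

You would then point cbimport's `-d` option at docs.json instead of temp.txt, so that `-f list` receives the array format it expects.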