Following this MS article, I set up the pipeline with the following steps:
Step 1: Added a new pipeline with two parameters named sourceSAS and destinationSAS, and gave their SAS tokens as values.
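For reference, this is roughly how those two parameters look in the pipeline JSON (the pipeline name is a placeholder of mine, and the SAS token values are masked):

```json
{
  "name": "<your-pipeline-name>",
  "properties": {
    "parameters": {
      "sourceSAS": {
        "type": "string",
        "defaultValue": "?<source-account-SAS-token>"
      },
      "destinationSAS": {
        "type": "string",
        "defaultValue": "?<destination-account-SAS-token>"
      }
    },
    "activities": []
  }
}
```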
Step 2: Created a Web activity named ListTables:

| Setting | Value |
| --- | --- |
| URL | @concat('https://srcstorage789.table.core.windows.net/Tables', pipeline().parameters.sourceSAS) |
| Method | GET |
| Headers | Accept: application/json |
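In JSON, that Web activity looks roughly like the sketch below (strings beginning with @ are evaluated as ADF expressions):

```json
{
  "name": "ListTables",
  "type": "WebActivity",
  "typeProperties": {
    "url": "@concat('https://srcstorage789.table.core.windows.net/Tables', pipeline().parameters.sourceSAS)",
    "method": "GET",
    "headers": {
      "Accept": "application/json"
    }
  }
}
```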
Step 3: ForEach activity:

| Setting | Value |
| --- | --- |
| Items | @activity('ListTables').output.value |
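The Query Tables response body has the shape {"value": [{"TableName": "..."}, ...]}, so the ForEach iterates over output.value. A sketch of the ForEach definition (the activity name here is just a placeholder):

```json
{
  "name": "ForEachTable",
  "type": "ForEach",
  "dependsOn": [
    { "activity": "ListTables", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    "items": {
      "value": "@activity('ListTables').output.value",
      "type": "Expression"
    },
    "activities": []
  }
}
```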
Step 4: Web activity named CreateTable, created inside the ForEach activity:

| Setting | Value |
| --- | --- |
| URL | @concat('https://deststorage789.table.core.windows.net/Tables', pipeline().parameters.destinationSAS) |
| Method | POST |
| Body | @json(concat('{"TableName":"', item().TableName, '"}')) |

Headers:

| Header | Value |
| --- | --- |
| Content-Type | application/json |
| x-ms-date | @utcNow() |
| x-ms-version | 2021-02-12 |
| Accept | application/json |
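A JSON sketch of the same activity, mapping those settings onto the Web activity definition (quotes inside the body expression are escaped for JSON):

```json
{
  "name": "CreateTable",
  "type": "WebActivity",
  "typeProperties": {
    "url": "@concat('https://deststorage789.table.core.windows.net/Tables', pipeline().parameters.destinationSAS)",
    "method": "POST",
    "headers": {
      "Content-Type": "application/json",
      "x-ms-date": "@utcNow()",
      "x-ms-version": "2021-02-12",
      "Accept": "application/json"
    },
    "body": "@json(concat('{\"TableName\":\"', item().TableName, '\"}'))"
  }
}
```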
Step 5: Create the Copy Data activity. Here I didn't understand the article clearly: should I create the datasets directly, or create the Copy Data activity?
How do I specify the table name dynamically so that it picks up all tables from the source storage account?
How do I set the value of the table field so that all source tables are copied to the destination table storage account?
When I gave a specific table name in both datasets of the Copy Data activity, it copied that table successfully, but I am unable to copy all the tables at once from the source to the destination storage account.
To specify multiple tables dynamically, you need to use dataset parameters.
Create a string parameter in your dataset like below.
Now, check the Enter manually checkbox in the dataset, add dynamic content for the table name, and reference the created parameter there, like below.
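In JSON, the parameterized Azure Table dataset looks roughly like this sketch (the dataset, linked service, and parameter names are just samples I've chosen; the key point is that typeProperties.tableName references the dataset parameter):

```json
{
  "name": "SourceTableDataset",
  "properties": {
    "type": "AzureTable",
    "linkedServiceName": {
      "referenceName": "SourceTableStorageLS",
      "type": "LinkedServiceReference"
    },
    "parameters": {
      "table_name": { "type": "string" }
    },
    "typeProperties": {
      "tableName": {
        "value": "@dataset().table_name",
        "type": "Expression"
      }
    }
  }
}
```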
Now, inside the ForEach, add the dataset to the copy activity and it will show the same parameter. Here, you need to pass your table name with the dynamic expression @item().TableName, like below. For the sample, I have used a Lookup activity.
If you want to specify the target table dynamically, you need to do the same for the target datasets as well.
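Putting it together, the Copy activity inside the ForEach would pass @item().TableName to the parameter of both the source and the sink dataset, roughly like this (dataset and parameter names are the placeholders from the sketch above):

```json
{
  "name": "CopyTable",
  "type": "Copy",
  "typeProperties": {
    "source": { "type": "AzureTableSource" },
    "sink": { "type": "AzureTableSink" }
  },
  "inputs": [
    {
      "referenceName": "SourceTableDataset",
      "type": "DatasetReference",
      "parameters": { "table_name": "@item().TableName" }
    }
  ],
  "outputs": [
    {
      "referenceName": "DestinationTableDataset",
      "type": "DatasetReference",
      "parameters": { "table_name": "@item().TableName" }
    }
  ]
}
```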
Execute the pipeline and it will give the expected results like below.