I am exporting Dialogflow interaction logs to BigQuery, into a table created according to the structure described at the link below:
https://cloud.google.com/dialogflow/cx/docs/concept/export-bq
CREATE TABLE <your_dataset_name>.dialogflow_bigquery_export_data (
  project_id STRING,
  agent_id STRING,
  conversation_name STRING,
  turn_position INTEGER,
  request_time TIMESTAMP,
  language_code STRING,
  request JSON,
  response JSON,
  partial_responses JSON,
  derived_data JSON,
  conversation_signals JSON
);
Some of the fields, such as request, response, and derived_data, are in JSON format.
What is the best way to analyze and query this table? Do I need to flatten it into another table, or into multiple tables, one per JSON column?
How should I approach data modelling for this table?
I am open to flattening the data if that is the right approach for analyzing it.
The JSON data type was introduced because it is more performant and cost-effective. When a column of type JSON is created, the data is stored in parsed form, so you get the benefits of a columnar database. You do not have to flatten the table to query the JSON data; you can extract values and fields from the JSON using the dot operator.
For example:
CREATE OR REPLACE TABLE mydataset.table1(id INT64, shopping_cart JSON);

INSERT INTO mydataset.table1 VALUES
(1, JSON """{
  "name": "Alice",
  "items": [
    {"product": "flowers", "price": 10},
    {"product": "book", "price": 50}
  ]
}""");
The following example returns the name field of the shopping_cart column.
SELECT shopping_cart.name
FROM mydataset.table1;
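Note that dot access returns a JSON value. When you need typed scalars, or need to expand a JSON array into rows, BigQuery's JSON functions handle that without flattening into separate tables. Here is a sketch against the example table above, using JSON_QUERY_ARRAY with UNNEST to turn the items array into rows, and JSON_VALUE/INT64 to extract typed scalars:

SELECT
  id,
  JSON_VALUE(item.product) AS product,  -- JSON string -> SQL STRING
  INT64(item.price) AS price            -- JSON number -> SQL INT64
FROM mydataset.table1,
UNNEST(JSON_QUERY_ARRAY(shopping_cart.items)) AS item;

The same pattern applies to your Dialogflow export table. The exact JSON paths depend on the payloads Dialogflow writes into request and response, so treat the paths below (queryInput.text.text, queryResult.match.intent.displayName, queryResult.match.confidence) as assumptions to verify against a few of your own rows, not as a documented schema:

SELECT
  conversation_name,
  turn_position,
  request_time,
  JSON_VALUE(request.queryInput.text.text) AS user_utterance,                   -- assumed path
  JSON_VALUE(response.queryResult.match.intent.displayName) AS matched_intent,  -- assumed path
  FLOAT64(response.queryResult.match.confidence) AS match_confidence            -- assumed path
FROM <your_dataset_name>.dialogflow_bigquery_export_data
ORDER BY conversation_name, turn_position;

If certain fields are queried constantly (for example, in dashboards), you can materialize these expressions into a view or a derived table, but for ad hoc analysis the JSON functions over the raw table are usually enough.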