I am following Druid SQL based ingestion API https://druid.apache.org/docs/latest/api-reference/sql-ingestion-api#submit-a-query
In the reference Python code, the data is read from an external URL and ingested into Druid:
import json
import requests

# Make sure you replace `your-instance` and `port` with the values for your deployment.
url = "https://<your-instance>:<port>/druid/v2/sql/task/"

payload = json.dumps({
    "query": "INSERT INTO wikipedia\nSELECT\n TIME_PARSE(\"timestamp\") AS __time,\n *\nFROM TABLE(\n EXTERN(\n '{\"type\": \"http\", \"uris\": [\"https://druid.apache.org/data/wikipedia.json.gz\"]}',\n '{\"type\": \"json\"}',\n '[{\"name\": \"added\", \"type\": \"long\"}, {\"name\": \"channel\", \"type\": \"string\"}, {\"name\": \"cityName\", \"type\": \"string\"}, {\"name\": \"comment\", \"type\": \"string\"}, {\"name\": \"commentLength\", \"type\": \"long\"}, {\"name\": \"countryIsoCode\", \"type\": \"string\"}, {\"name\": \"countryName\", \"type\": \"string\"}, {\"name\": \"deleted\", \"type\": \"long\"}, {\"name\": \"delta\", \"type\": \"long\"}, {\"name\": \"deltaBucket\", \"type\": \"string\"}, {\"name\": \"diffUrl\", \"type\": \"string\"}, {\"name\": \"flags\", \"type\": \"string\"}, {\"name\": \"isAnonymous\", \"type\": \"string\"}, {\"name\": \"isMinor\", \"type\": \"string\"}, {\"name\": \"isNew\", \"type\": \"string\"}, {\"name\": \"isRobot\", \"type\": \"string\"}, {\"name\": \"isUnpatrolled\", \"type\": \"string\"}, {\"name\": \"metroCode\", \"type\": \"string\"}, {\"name\": \"namespace\", \"type\": \"string\"}, {\"name\": \"page\", \"type\": \"string\"}, {\"name\": \"regionIsoCode\", \"type\": \"string\"}, {\"name\": \"regionName\", \"type\": \"string\"}, {\"name\": \"timestamp\", \"type\": \"string\"}, {\"name\": \"user\", \"type\": \"string\"}]'\n )\n)\nPARTITIONED BY DAY",
    "context": {
        "maxNumTasks": 3
    }
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.post(url, headers=headers, data=payload, auth=('USER', 'PASSWORD'))
print(response.text)
Is there a way to use an INSERT statement to ingest a list of dictionaries directly into Druid?
data = [
{'a': 1, 'b':2}, {'a': 3, 'b': 4}
]
Apache Druid has the concept of "LOOKUP" tables, which are key-value pair tables, distributed to processes ahead of time. They are created using API calls in JSON, rather than SQL - there's no SQL equivalent.
For a walkthrough of these, there's a Python notebook in the learn-druid repo from Imply.
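As a rough sketch of what those JSON API calls look like for a simple map-based lookup: the body below follows the shape documented for Druid's lookup configuration endpoint on the Coordinator. The host, tier, lookup name, and mapping here are placeholders.

```python
import json

# Placeholder Coordinator address; lookup config lives on the Coordinator,
# not the Broker/Router SQL endpoint used for ingestion.
COORDINATOR = "https://<your-instance>:<port>"

def lookup_payload(tier, name, mapping, version="v1"):
    """Build the JSON body for registering a simple map-based lookup."""
    return {
        tier: {
            name: {
                "version": version,
                "lookupExtractorFactory": {
                    "type": "map",
                    "map": mapping,
                },
            }
        }
    }

payload = lookup_payload("__default", "country_names",
                         {"US": "United States", "FR": "France"})
url = f"{COORDINATOR}/druid/coordinator/v1/lookups/config"
print(json.dumps(payload, indent=2))
# To register it against a running cluster:
# requests.post(url, json=payload, auth=('USER', 'PASSWORD'))
```

Once registered, the lookup is usable from SQL via the LOOKUP() function, but the table itself is still managed through this JSON API rather than SQL DDL.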
Noting that all tables in Druid require a timestamp, you could technically use a SQL-based ingestion to create a standard table, perhaps using inline data. IMHO that would be a hack.
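If you do go that route, the trick is to swap the `http` input source in EXTERN for the `inline` one, whose `data` field takes newline-delimited JSON. Below is a sketch that builds such a payload from a list of dicts; the table name is a placeholder, integer values are mapped to Druid's `long` type and everything else to `string`, and rows are stamped with CURRENT_TIMESTAMP because every Druid table needs a `__time` column (in practice you'd parse a real time field).

```python
import json

def sql_quote(s):
    """Escape a string as a SQL single-quoted literal."""
    return "'" + s.replace("'", "''") + "'"

def build_inline_insert(table, rows):
    """Build a SQL-ingestion payload that INSERTs `rows` (a list of dicts)
    via the inline input source instead of an external URL."""
    # The inline input source carries the records as newline-delimited JSON.
    data = "\n".join(json.dumps(row) for row in rows)
    # Derive a simple column signature from the first row (assumes uniform rows).
    signature = [
        {"name": k, "type": "long" if isinstance(v, int) else "string"}
        for k, v in rows[0].items()
    ]
    query = (
        f"INSERT INTO {table}\n"
        "SELECT CURRENT_TIMESTAMP AS __time, *\n"
        "FROM TABLE(EXTERN(\n"
        f"  {sql_quote(json.dumps({'type': 'inline', 'data': data}))},\n"
        f"  {sql_quote(json.dumps({'type': 'json'}))},\n"
        f"  {sql_quote(json.dumps(signature))}\n"
        "))\n"
        "PARTITIONED BY DAY"
    )
    return json.dumps({"query": query, "context": {"maxNumTasks": 3}})

data = [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}]
payload = build_inline_insert("my_table", data)
# POST this payload to /druid/v2/sql/task/ exactly as in the URL-based example:
# requests.post(url, headers=headers, data=payload, auth=('USER', 'PASSWORD'))
```

This works for small batches, but for anything high-volume or continuous you'd be better served by streaming ingestion (e.g. via Kafka) than by stuffing data into SQL string literals.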