ArrowInvalid: Unable to merge: Field X has incompatible types: string vs dictionary<values=string, indices=int32, ordered=0>
I am trying to write the result of a Snowflake query to disk and then query that data using Arrow and DuckDB. I created a partitioned Parquet dataset with the query below, following this:
COPY INTO 's3://path/to/folder/'
FROM (
SELECT transaction.TRANSACTION_ID, OUTPUT_SCORE, MODEL_NAME, ACCOUNT_ID, to_char(TRANSACTION_DATE,'YYYY-MM') as SCORE_MTH
FROM transaction
)
PARTITION BY ('SCORE_MTH=' || score_mth || '/ACCOUNT_ID=' || ACCOUNT_ID)
FILE_FORMAT = (TYPE = parquet)
HEADER = true;
When I try to read the parquet files I get the following error:
import pandas as pd

df = pd.read_parquet('path/to/parquet/')  # same result using pq.ParquetDataset or pq.read_table, as they all use the same function under the hood
ArrowInvalid: Unable to merge: Field SCORE_MTH has incompatible types: string vs dictionary<values=string, indices=int32, ordered=0>
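For reference, you can check the schema stored inside an individual file with pyarrow to see where the mismatch comes from; the file path below is just a made-up example of the layout Snowflake produces, yours will differ:

import pyarrow.parquet as pq

# The partition columns are also stored inside the files (they were in
# the SELECT), and there SCORE_MTH is a plain string, while the hive
# partition inference turns the same key into a dictionary type
print(pq.read_schema('path/to/parquet/SCORE_MTH=2019-01/ACCOUNT_ID=1/data_0_0_0.snappy.parquet'))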
Moreover, after some googling I found this page. Following its instructions:
df = pd.read_parquet('path/to/parquet/', use_legacy_dataset=True)
ValueError: Schema in partition[SCORE_MTH=0, ACCOUNT_ID=0] /path/to/parquet was different.
TRANSACTION_ID: string not null
OUTPUT_SCORE: double
MODEL_NAME: string
ACCOUNT_ID: int32
SCORE_MTH: string
vs
TRANSACTION_ID: string not null
OUTPUT_SCORE: double
MODEL_NAME: string
Depending on the data types involved, the error may instead look like:
ArrowInvalid: Unable to merge: Field X has incompatible types: IntegerType vs DoubleType
or
ArrowInvalid: Unable to merge: Field X has incompatible types: decimal vs int32
This is a known issue.
Any idea how I can read these Parquet files?
The only workaround I found that works is this:
import pyarrow.dataset as ds

# Discover the hive-style partitioning (SCORE_MTH=.../ACCOUNT_ID=...)
dataset = ds.dataset('/path/to/parquet/', format="parquet", partitioning="hive")
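If you would rather not rely on type inference for the partition keys, pyarrow also lets you declare the partitioning schema explicitly; a minimal sketch, assuming the column types shown in the question:

import pyarrow as pa
import pyarrow.dataset as ds

# Spell out the partition key types so they match the columns stored
# inside the files, rather than letting discovery infer them
part = ds.partitioning(
    pa.schema([("SCORE_MTH", pa.string()), ("ACCOUNT_ID", pa.int32())]),
    flavor="hive",
)
dataset = ds.dataset('/path/to/parquet/', format="parquet", partitioning=part)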
Then you can query it directly using DuckDB:
import duckdb

con = duckdb.connect()
# DuckDB's replacement scan resolves `dataset` to the pyarrow Dataset in scope
pandas_df = con.execute("SELECT * FROM dataset").df()
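Since DuckDB pushes projections and filters down into the Arrow scan, you can also run aggregations without materializing the whole dataset; an illustrative query against the dataset from above (the SCORE_MTH value '2023-01' is just a made-up example):

import duckdb

con = duckdb.connect()
# Filtering on the partition column means only matching files should be scanned
monthly = con.execute(
    "SELECT MODEL_NAME, avg(OUTPUT_SCORE) AS avg_score "
    "FROM dataset "
    "WHERE SCORE_MTH = '2023-01' "
    "GROUP BY MODEL_NAME"
).df()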
Also, if you want a pandas DataFrame, you can do this:
dataset.to_table().to_pandas()
Note that to_table() will load the whole dataset into memory.
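If the full table does not fit in memory, you can project and filter before converting; a sketch reusing the dataset from above (again, the month value is hypothetical):

import pyarrow.dataset as ds

# Materialize only the needed columns for a single month instead of everything
table = dataset.to_table(
    columns=["TRANSACTION_ID", "OUTPUT_SCORE"],
    filter=ds.field("SCORE_MTH") == "2023-01",
)
df = table.to_pandas()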