python, python-3.x, pyspark, pattern-matching, fpgrowth

Convert StringType Column To ArrayType In PySpark


I have a dataframe with a column "EVENT_ID" whose datatype is String. I am running the FPGrowth algorithm, but it throws the error below:

Py4JJavaError: An error occurred while calling o1711.fit. 
:java.lang.IllegalArgumentException: requirement failed: 
The input column must be array, but got string.

The column EVENT_ID has values such as:

E_34503_Probe
E_35203_In
E_31901_Cbc

I am using the code below to convert the string column to ArrayType:

df2 = df.withColumn("EVENT_ID", df["EVENT_ID"].cast(types.ArrayType(types.StringType())))

But I get the following error

Py4JJavaError: An error occurred while calling o1874.withColumn.
: org.apache.spark.sql.AnalysisException: cannot resolve '`EVENT_ID`' due to data type mismatch: cannot cast string to array<string>;;

How do I either cast this column to ArrayType or run the FPGrowth algorithm with a StringType column?


Solution

  • Original answer

    Try the following.

    In  [0]: from pyspark.sql.types import StringType
             from pyspark.sql.functions import col, regexp_replace, split
    
    In  [1]: df = spark.createDataFrame(["E_34503_Probe", "E_35203_In", "E_31901_Cbc"], StringType()).toDF("EVENT_ID")
             df.show()
    Out [1]: +-------------+
             |     EVENT_ID|
             +-------------+
             |E_34503_Probe|
             |   E_35203_In|
             |  E_31901_Cbc|
             +-------------+
    
    In  [2]: df_new = df.withColumn("EVENT_ID", split(regexp_replace(col("EVENT_ID"), r"(^\[)|(\]$)|(')", ""), ", "))
             df_new.printSchema()
    Out [2]: root
              |-- EVENT_ID: array (nullable = true)
              |    |-- element: string (containsNull = true)
    

    I hope it will be helpful.
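
    A note on the regexp_replace step: the pattern only does real work when EVENT_ID holds a stringified list such as "['E_34503_Probe', 'E_35203_In']" (a made-up value, not from the question); for plain values like the ones above it removes nothing, and splitting on ", " simply wraps each value in a single-element array. A quick sketch of the stringified-list case:

    In  [3]: df_str = spark.createDataFrame(["['E_34503_Probe', 'E_35203_In']"], StringType()).toDF("EVENT_ID")
             df_str.withColumn("EVENT_ID", split(regexp_replace(col("EVENT_ID"), r"(^\[)|(\]$)|(')", ""), ", ")).show(truncate=False)
    Out [3]: +---------------------------+
             |EVENT_ID                   |
             +---------------------------+
             |[E_34503_Probe, E_35203_In]|
             +---------------------------+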

    Edited answer

    As @pault rightly pointed out in his comment, a much simpler solution is the following:

    In  [0]: from pyspark.sql.types import StringType
             from pyspark.sql.functions import array
    
    In  [1]: df = spark.createDataFrame(["E_34503_Probe", "E_35203_In", "E_31901_Cbc"], StringType()).toDF("EVENT_ID")
             df.show()
    Out [1]: +-------------+
             |     EVENT_ID|
             +-------------+
             |E_34503_Probe|
             |   E_35203_In|
             |  E_31901_Cbc|
             +-------------+
    
    In  [2]: df_new = df.withColumn("EVENT_ID", array(df["EVENT_ID"]))
             df_new.printSchema()
    Out [2]: root
               |-- EVENT_ID: array (nullable = false)
               |    |-- element: string (containsNull = true)
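
    With EVENT_ID now an array column, FPGrowth should accept it as itemsCol. A minimal sketch of the fit step (the minSupport and minConfidence values are placeholders, not taken from the question):

    In  [3]: from pyspark.ml.fpm import FPGrowth
             # placeholder thresholds; tune them for the real data
             fp = FPGrowth(itemsCol="EVENT_ID", minSupport=0.1, minConfidence=0.1)
             model = fp.fit(df_new)  # no longer fails with "The input column must be array, but got string."
             model.freqItemsets.show()
             model.associationRules.show()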