python, apache-spark, pyspark

PySpark: replace strings in a Spark DataFrame column


I'd like to perform some basic stemming on a Spark DataFrame column by replacing substrings. What's the quickest way to do this?

In my current use case, I have a list of addresses that I want to normalize. For example, this dataframe:

id     address
1       2 foo lane
2       10 bar lane
3       24 pants ln

would become:

id     address
1       2 foo ln
2       10 bar ln
3       24 pants ln
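
For reference, here is a minimal sketch that builds this example dataframe. It assumes an active SparkSession; the spark variable and the app name are placeholders, so reuse your own session if you already have one:

from pyspark.sql import SparkSession

# assumption: no session exists yet; drop this if you already have `spark`
spark = SparkSession.builder.appName("address-normalization").getOrCreate()

df = spark.createDataFrame(
    [(1, "2 foo lane"), (2, "10 bar lane"), (3, "24 pants ln")],
    ["id", "address"],
)
df.show()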

Solution

  • For Spark 1.5 or later, you can use the functions package:

    from pyspark.sql.functions import regexp_replace

    # replace every match of the pattern 'lane' in the address column with 'ln'
    newDf = df.withColumn('address', regexp_replace('address', 'lane', 'ln'))
    

    Quick explanation: regexp_replace generates a new column by replacing
    every substring of the address column that matches the regular
    expression 'lane' with the replacement 'ln'. Passing the result to
    withColumn under the same name overwrites the original address column.
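
    One caveat: the pattern is a regular expression and matches anywhere in
    the string, so 'lane' would also be rewritten inside a word like
    'laneway'. A sketch of a safer variant uses a word boundary in the
    pattern (Spark uses Java regex, where \b is supported):

    from pyspark.sql.functions import regexp_replace

    # \blane\b matches 'lane' only as a whole word
    newDf = df.withColumn('address', regexp_replace('address', r'\blane\b', 'ln'))
    newDf.show()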