I have a PySpark dataframe.
df1 = spark.createDataFrame([
("u1", [0, 1, 2]),
("u1", [1, 2, 3]),
("u2", [2, 3, 4]),
],
['user_id', 'features'])
print(df1.printSchema())
df1.show(truncate=False)
Output-
root
|-- user_id: string (nullable = true)
|-- features: array (nullable = true)
| |-- element: long (containsNull = true)
None
+-------+---------+
|user_id|features |
+-------+---------+
|u1 |[0, 1, 2]|
|u1 |[1, 2, 3]|
|u2 |[2, 3, 4]|
+-------+---------+
I want to normalize the features by their L2 norm, so I wrote a UDF-
def norm_2_func(features):
    return features/np.linalg.norm(features, 2)
norm_2_udf = udf(norm_2_func, ArrayType(FloatType()))
df2 = df1.withColumn('l2_features', norm_2_udf(F.col('features')))
But it is throwing some errors. How can I achieve this?
The expected output is -
+-------+---------+----------------------+
|user_id|features | L2_norm|
+-------+---------+----------------------+
|u1 |[0, 1, 2]| [0.000, 0.447, 0.894]|
|u1 |[1, 2, 3]| [0.267, 0.534, 0.801]|
|u2 |[2, 3, 4]| [0.371, 0.557, 0.742]|
+-------+---------+----------------------+
NumPy arrays contain NumPy dtypes (e.g. np.float64), which need to be cast to plain Python types (float, int, etc.) before they are returned from the UDF:
import numpy as np
import pyspark.sql.functions as F
from pyspark.sql.types import ArrayType, FloatType
def norm_2_func(features):
    return [float(i) for i in features/np.linalg.norm(features, 2)]
    # you can also use
    # return list(map(float, features/np.linalg.norm(features, 2)))
norm_2_udf = F.udf(norm_2_func, ArrayType(FloatType()))
df2 = df1.withColumn('l2_features', norm_2_udf(F.col('features')))
df2.show(truncate=False)
+-------+---------+-----------------------------------+
|user_id|features |l2_features |
+-------+---------+-----------------------------------+
|u1 |[0, 1, 2]|[0.0, 0.4472136, 0.8944272] |
|u1 |[1, 2, 3]|[0.26726124, 0.5345225, 0.80178374]|
|u2 |[2, 3, 4]|[0.37139067, 0.557086, 0.74278134] |
+-------+---------+-----------------------------------+
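If you prefer to avoid a Python UDF altogether, the same normalization can be done with Spark's built-in higher-order functions transform and aggregate (available since Spark 2.4). A minimal sketch, assuming the features column is a numeric array as above:
import pyspark.sql.functions as F
df3 = df1.withColumn(
    'l2_features',
    F.expr("""
        transform(
            features,
            x -> x / sqrt(aggregate(features, cast(0 as double), (acc, y) -> acc + y * y))
        )
    """)
)
df3.show(truncate=False)
This keeps the computation inside the JVM, so rows are not serialized to a Python worker the way they are with a UDF. Note that it returns array<double> rather than array<float>, and (in the default non-ANSI mode) an all-zero features array would produce nulls from the division by zero.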