I have this data frame:
val df = (
  spark
    .createDataFrame(
      Seq((1L, 2L), (1L, 5L), (1L, 8L), (2L, 4L), (2L, 6L), (2L, 8L))
    )
    .toDF("A", "B")
    .groupBy("A")
    .agg(collect_list("B").alias("B"))
)
And I would like to transform it to the following form:
val dfTransformed = (
  spark
    .createDataFrame(
      Seq(
        (1, Vectors.sparse(9, Seq((2, 1.0), (5, 1.0), (8, 1.0)))),
        (2, Vectors.sparse(9, Seq((4, 1.0), (6, 1.0), (8, 1.0))))
      )
    )
    .toDF("A", "B")
)
I want to do this so that I can use the MinHashLSH transformation (https://spark.apache.org/docs/2.2.3/api/scala/index.html#org.apache.spark.ml.feature.MinHashLSH).
I have tried with a UDF as follows, but without success:
def f(x:Array[Long]) = Vectors.sparse(9, x.map(p => (p.toInt,1.0)).toSeq)
val udff = udf((x:Array[Long]) => f(x))
val dfTransformed = df.withColumn("transformed", udff(col("B"))).show()
Could anyone help me, please?
Use Seq for the UDF parameter, not Array. Spark hands an array column to a Scala UDF as a Seq (a WrappedArray), so a UDF declared with Array[Long] fails at runtime with a ClassCastException:
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.functions.{col, udf}

def f(x: Seq[Long]) = Vectors.sparse(9, x.map(p => (p.toInt, 1.0)))
val udff = udf((x: Seq[Long]) => f(x))
val dfTransformed = df.withColumn("transformed", udff(col("B")))
dfTransformed.show(false)
+---+---------+-------------------------+
|A |B |transformed |
+---+---------+-------------------------+
|1 |[2, 5, 8]|(9,[2,5,8],[1.0,1.0,1.0])|
|2 |[4, 6, 8]|(9,[4,6,8],[1.0,1.0,1.0])|
+---+---------+-------------------------+
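From here, the transformed column can feed MinHashLSH directly, which is what the question is ultimately after. A minimal sketch, assuming the dfTransformed from above and a live SparkSession; numHashTables = 3 and the 0.6 join threshold are arbitrary illustration values, not recommendations:

```scala
import org.apache.spark.ml.feature.MinHashLSH

// Fit MinHashLSH on the sparse-vector column produced by the UDF.
val mh = new MinHashLSH()
  .setNumHashTables(3)          // arbitrary illustration value
  .setInputCol("transformed")
  .setOutputCol("hashes")

val model = mh.fit(dfTransformed)

// Append the hash signatures to each row.
model.transform(dfTransformed).show(false)

// Approximate self-join on Jaccard distance below a threshold (0.6 here):
model
  .approxSimilarityJoin(dfTransformed, dfTransformed, 0.6, "JaccardDistance")
  .show(false)
```

Note that MinHashLSH treats the vector as a set: any non-zero entry counts as membership, so the 1.0 values from the UDF are exactly what it expects.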