This is my dataframe:
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
spark = SparkSession.builder.getOrCreate()
dCols = ['c1', 'c2']
dData = [('a', 'b'),
         ('c', 'd'),
         ('e', None)]
df = spark.createDataFrame(dData, dCols)
Is there a syntax to include null inside .isin()? Something like:
df = df.withColumn(
    'newCol',
    F.when(F.col('c2').isin({'d', None}), 'true')  # <=====?
     .otherwise('false')
)
df.show()
After executing the code I get
+---+----+------+
| c1| c2|newCol|
+---+----+------+
| a| b| false|
| c| d| true|
| e|null| false|
+---+----+------+
instead of
+---+----+------+
| c1| c2|newCol|
+---+----+------+
| a| b| false|
| c| d| true|
| e|null| true|
+---+----+------+
I would like to find a solution where I do not need to reference the same column twice, as is currently required:
(F.col('c2') == 'd') | F.col('c2').isNull()
One reference to the column is not enough in this case. isin() follows SQL IN semantics: any comparison against NULL evaluates to NULL rather than true, so putting None inside isin() never matches a null value, and when() treats the resulting NULL as false. To check for nulls you need to use a separate isNull method.
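If you want to keep the when/otherwise form from the question, a minimal sketch of the corrected condition looks like this (assuming the df defined above):

from pyspark.sql import functions as F

df.withColumn(
    'newCol',
    # spell out the null check alongside isin()
    F.when(F.col('c2').isin(['d']) | F.col('c2').isNull(), 'true')
     .otherwise('false')
).show()

With the explicit isNull(), the ('e', null) row now gets 'true'.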
Also, if you want a column of true/false values, the combined condition is already a Boolean column, so you can use it directly without when:
import pyspark.sql.functions as F

df2 = df.withColumn(
    'newCol',
    # boolean condition used directly as the new column
    F.col('c2').isin(['d']) | F.col('c2').isNull()
)
df2.show()
+---+----+------+
| c1| c2|newCol|
+---+----+------+
| a| b| false|
| c| d| true|
| e|null| true|
+---+----+------+
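As a quick sanity check (a minimal sketch reusing df2 from above), you can filter on the new column; only the ('c', 'd') and ('e', null) rows should remain:

df2.filter(F.col('newCol')).show()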