I’m using Databricks Community Edition (the free tier) with Spark 4.0.0. I noticed that the UI no longer allows creating a standard cluster; only the serverless compute option is available.
I tried the following code:
spark.conf.get("spark.sql.adaptiveExecution.enabled")
But I got the error:
"[CONFIG_NOT_AVAILABLE] Configuration spark.sql.adaptiveExecution.enabled is not available. SQLSTATE: 42K0I"
And when I tried df.rdd.getNumPartitions(), I got the error below:
"Using custom code using PySpark RDDs is not allowed on serverless compute. We suggest using mapInPandas or mapInArrow for the most common use cases."
Does serverless compute in the free edition have these limitations? Is there a workaround to use these APIs?
Yep, the RDD API is not supported on serverless compute. Serverless runs on Spark Connect, which exposes only the DataFrame API, and Spark configs are locked down too, which is why even the conf.get call fails with CONFIG_NOT_AVAILABLE. The error message already points at the supported alternatives, mapInPandas and mapInArrow.
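A minimal sketch of those alternatives, assuming a toy DataFrame built with spark.range(10); adapt to your own data:

from pyspark.sql.functions import spark_partition_id

df = spark.range(10)

# Instead of df.rdd.getNumPartitions(): count the distinct partition IDs
# via the DataFrame API. Unlike the RDD call, this triggers a job.
num_parts = df.select(spark_partition_id().alias("pid")).distinct().count()
print(num_parts)

# Instead of RDD mapPartitions-style code: mapInPandas receives an
# iterator of pandas DataFrames per partition and yields transformed ones.
def double_id(batches):
    for pdf in batches:
        pdf["id"] = pdf["id"] * 2
        yield pdf

df.mapInPandas(double_id, schema="id long").show()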