I'm trying to build a web API for my Apache Spark jobs using the sparkjava.com framework. My code is:
@Override
public void init() {
    get("/hello", (req, res) -> {
        String sourcePath = "hdfs://spark:54310/input/*";
        SparkConf conf = new SparkConf().setAppName("LineCount");
        conf.setJars(new String[] { "/home/sam/resin-4.0.42/webapps/test.war" });
        File configFile = new File("config.properties");
        String sparkURI = "spark://hamrah:7077";
        conf.setMaster(sparkURI);
        conf.set("spark.driver.allowMultipleContexts", "true");
        JavaSparkContext sc = new JavaSparkContext(conf);
        @SuppressWarnings("resource")
        JavaRDD<String> log = sc.textFile(sourcePath);
        JavaRDD<String> lines = log.filter(x -> {
            return true;
        });
        return lines.count();
    });
}
If I remove the lambda expression, or if I put it inside a simple jar rather than a web service (that is, a servlet), it runs without any error. But using a lambda expression inside a servlet results in this exception:
15/01/28 10:36:33 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, hamrah): java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.api.java.JavaRDD$$anonfun$filter$1.f$1 of type org.apache.spark.api.java.function.Function in instance of org.apache.spark.api.java.JavaRDD$$anonfun$filter$1
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1999)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
P.S.: I tried combinations of Jersey and SparkJava with Jetty, Tomcat, and Resin, and all of them led me to the same result.
What you have here is a follow-up error which masks the original error.
When lambda instances are serialized, they use writeReplace to dissolve their JRE-specific implementation from the persistent form, which is a SerializedLambda instance. When the SerializedLambda instance has been restored, its readResolve method will be invoked to reconstitute the appropriate lambda instance. As the documentation says, it will do so by invoking a special method of the class which defined the original lambda (see also this answer). The important point is that the original class is needed, and that’s what’s missing in your case.
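To see what that persistent form records, here is a minimal sketch (the class name LambdaForm and the reflective lookup are illustrative additions, not from the question): it extracts the SerializedLambda from a serializable lambda and prints the capturing class that readResolve will later need:

import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;

public class LambdaForm {
    public static void main(String... arg) throws Exception {
        Runnable r = (Runnable & Serializable) () -> {};
        // A serializable lambda gets a synthetic writeReplace method that
        // returns its persistent form, a SerializedLambda instance.
        Method writeReplace = r.getClass().getDeclaredMethod("writeReplace");
        writeReplace.setAccessible(true);
        SerializedLambda sl = (SerializedLambda) writeReplace.invoke(r);
        // The persistent form records the class that defined the lambda;
        // readResolve invokes a special method of exactly that class, so it
        // must be present in the runtime where deserialization happens.
        System.out.println(sl.getCapturingClass());
    }
}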
But there’s a …special… behavior of the ObjectInputStream. When it encounters an exception, it doesn’t bail out immediately. It records the exception and continues the process, marking all objects currently being read, and thus depending on the erroneous object, as erroneous as well. Only at the end of the process does it throw the original exception it encountered. What makes it so strange is that it will also continue trying to set the fields of these objects. But when you look at the method ObjectInputStream.readOrdinaryObject, line 1806:
…
    if (obj != null &&
        handles.lookupException(passHandle) == null &&
        desc.hasReadResolveMethod())
    {
        Object rep = desc.invokeReadResolve(obj);
        if (unshared && rep.getClass().isArray()) {
            rep = cloneArray(rep);
        }
        if (rep != obj) {
            handles.setObject(passHandle, obj = rep);
        }
    }
    return obj;
}
you see that it doesn’t call the readResolve method when lookupException reports a non-null exception. But when the substitution did not happen, it’s not a good idea to continue trying to set the field values of the referrer; yet that’s exactly what happens here, hence producing a ClassCastException.
You can easily reproduce the problem:
import java.io.Serializable;

public class Holder implements Serializable {
    Runnable r;
}

import java.io.Serializable;

public class Defining {
    public static Holder get() {
        final Holder holder = new Holder();
        holder.r = (Runnable & Serializable) () -> {};
        return holder;
    }
}

import java.io.*;

public class Writing {
    static final File f = new File(System.getProperty("java.io.tmpdir"), "x.ser");

    public static void main(String... arg) throws IOException {
        try(FileOutputStream os = new FileOutputStream(f);
            ObjectOutputStream oos = new ObjectOutputStream(os)) {
            oos.writeObject(Defining.get());
        }
        System.out.println("written to " + f);
    }
}

import java.io.*;

public class Reading {
    static final File f = new File(System.getProperty("java.io.tmpdir"), "x.ser");

    public static void main(String... arg) throws IOException, ClassNotFoundException {
        try(FileInputStream is = new FileInputStream(f);
            ObjectInputStream ois = new ObjectInputStream(is)) {
            Holder h = (Holder) ois.readObject();
            System.out.println(h.r);
            h.r.run();
        }
        System.out.println("read from " + f);
    }
}
Compile these four classes and run Writing. Then delete the class file Defining.class and run Reading. You will then get:
Exception in thread "main" java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field test.Holder.r of type java.lang.Runnable in instance of test.Holder
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
(Tested with 1.8.0_20)
The bottom line is that you may forget about this serialization issue once you understand what’s happening. All you have to do to solve your problem is to make sure that the class which defined the lambda expression is also available in the runtime where the lambda is deserialized.
Example for a Spark job run directly from the IDE (spark-submit distributes the jar by default):
SparkConf sconf = new SparkConf()
        .set("spark.eventLog.dir", "hdfs://nn:8020/user/spark/applicationHistory")
        .set("spark.eventLog.enabled", "true")
        .setJars(new String[]{"/path/to/jar/with/your/class.jar"})
        .setMaster("spark://spark.standalone.uri:7077");
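As a side note, the jar can also be added after the context has been created, via the standard JavaSparkContext.addJar API; a minimal sketch, reusing the illustrative path from above:

// Ship the jar containing the class that defines the lambda to the
// executors, so the SerializedLambda can be resolved when tasks are
// deserialized there.
JavaSparkContext sc = new JavaSparkContext(sconf);
sc.addJar("/path/to/jar/with/your/class.jar");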