Tags: java, apache-spark, getter-setter

My classes' getters and setters are now prefix-less, but Spark can't use their objects correctly, as if the get and set prefixes were mandatory


For convenience, I'm switching my business classes to a style where getter and setter methods no longer have get and set prefixes. I'm also making the setters chainable (fluent). For example:

Given private NatureJuridiqueGroupement natureJuridique;, an enum member, this:

public NatureJuridiqueGroupement getNatureJuridique() {
   return this.natureJuridique;
}

public void setNatureJuridique(NatureJuridiqueGroupement natureJuridique) {
   this.natureJuridique = natureJuridique;
}

is becoming:

public NatureJuridiqueGroupement natureJuridique() {
   return this.natureJuridique;
}

@SuppressWarnings("unchecked")
public SELF natureJuridique(NatureJuridiqueGroupement natureJuridique) {
   this.natureJuridique = natureJuridique;
   return (SELF)this;
}

The problem now is that I have an object Groupement with the members interdepartemental, zoneDeMontagne, gestionDesEaux, natureJuridique, fiscalitePropre and siren, for which I have these getter methods:

interdepartemental(), zoneDeMontagne(), gestionDesEaux(),
natureJuridique(), getNatureJuridique()
isFiscalitePropre()

and when I run, on Spark 3.5.6:

Dataset<Groupement> perimetres = myBuildingMethod(..my parameters..);
perimetres.show(2000, false);

it returns what is shown below, ignoring (not detecting) the prefix-less getters
(and therefore the prefix-less setters too. Here my code still works because my Groupement objects are created by the Encoder through a constructor, not through a series of setter calls):

fiscalitePropre  natureJuridique           siren
false            SYNDICAT_MIXTE_FERME      200004000
false            SYNDICAT_MIXTE_FERME      200004000
true             COMMUNAUTE_AGGLOMERATION  200010700
true             COMMUNAUTE_AGGLOMERATION  200010700

Is there a way to make Spark detect these getter and setter methods (automatically, as it did until now), or am I doomed to putting my old getters and setters back?


Solution

  • Spark uses JavaBeans reflection to inspect POJOs, so it requires getX()/isX() for getters and setX(Type) for setters.

    By removing the get/set prefixes and using overloaded methods as fluent setters, your methods no longer adhere to the JavaBeans conventions, so Spark cannot detect them when inferring the schema.
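    You can check this without Spark at all: the JDK's own java.beans.Introspector applies the same naming conventions that Spark's bean encoder relies on. A minimal sketch (the Groupement class below is a cut-down illustration, not your real one) shows that prefix-less methods simply don't surface as bean properties:

    ```java
    import java.beans.IntrospectionException;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;
    import java.util.Set;
    import java.util.TreeSet;

    public class BeanCheck {

        // Cut-down bean mixing both styles.
        public static class Groupement {
            private String siren;
            private boolean fiscalitePropre;

            // JavaBeans style: detected as properties.
            public String getSiren() { return siren; }
            public void setSiren(String siren) { this.siren = siren; }
            public boolean isFiscalitePropre() { return fiscalitePropre; }

            // Prefix-less style: ignored by introspection.
            public String siren() { return siren; }
        }

        // Collect the property names JavaBeans introspection finds.
        static Set<String> properties(Class<?> c) throws IntrospectionException {
            Set<String> names = new TreeSet<>();
            for (PropertyDescriptor pd
                    : Introspector.getBeanInfo(c, Object.class).getPropertyDescriptors()) {
                names.add(pd.getName());
            }
            return names;
        }

        public static void main(String[] args) throws Exception {
            // Only the get/is/set-prefixed methods show up.
            System.out.println(properties(Groupement.class)); // [fiscalitePropre, siren]
        }
    }
    ```

    The prefix-less siren() is a perfectly valid Java method, but as far as introspection is concerned it is just a method, not a property accessor.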

    I know this may sound like an anti-pattern, but what you can do is keep your getters and setters and mark them as deprecated, purely for compatibility with Spark.

    @Deprecated 
    public NatureJuridiqueGroupement getNatureJuridique() 
    {
        return natureJuridique;
    }
    

    You can now use natureJuridique() internally and in your tests. Spark ignores deprecation: @Deprecated works here not because Spark uses it, but because the method still has a valid JavaBean getter signature, which Java (and tools like Spark) continue to recognize.


    The other option is to stick with your current code, but add the getters and setters just for Spark:

    public NatureJuridiqueGroupement natureJuridique() 
    {
        return this.natureJuridique;
    }
    
    @SuppressWarnings("unchecked")
    public SELF natureJuridique(NatureJuridiqueGroupement val) 
    {
        this.natureJuridique = val;
        return (SELF)this;
    }
    
    
    // Spark only:
    public NatureJuridiqueGroupement getNatureJuridique() {
        return this.natureJuridique;
    }
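
    Note that the snippets above only restore the read side. If Spark ever has to build your objects itself (deserialization) and cannot do it through a matching constructor, the bean encoder also needs a no-arg constructor and setX() setters. A hedged sketch combining both APIs (the field names come from your question; everything else is illustrative, shown here on a non-generic class for brevity):

    ```java
    public class Groupement {
        private String siren;
        private boolean fiscalitePropre;

        // Fluent, prefix-less API used by application code.
        public String siren() { return siren; }
        public Groupement siren(String val) { this.siren = val; return this; }
        public boolean fiscalitePropre() { return fiscalitePropre; }
        public Groupement fiscalitePropre(boolean val) { this.fiscalitePropre = val; return this; }

        // JavaBean accessors kept only so a bean-style encoder can
        // read and, when deserializing, populate the fields.
        @Deprecated public String getSiren() { return siren; }
        @Deprecated public void setSiren(String siren) { this.siren = siren; }
        @Deprecated public boolean isFiscalitePropre() { return fiscalitePropre; }
        @Deprecated public void setFiscalitePropre(boolean b) { this.fiscalitePropre = b; }
    }
    ```

    Both APIs write through the same fields, so fluent and JavaBean calls stay consistent, and the @Deprecated markers steer your own code toward the prefix-less style.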