Tags: java, jvm, bytecode, jvm-bytecode, java-record

Bytecode transforming record class to be mutable


I just saw that EBean does bytecode transformation of record class files in a way that feels odd to me, and I am looking for an answer about whether this is legal from a JVM point of view.

Apparently, it is possible to have a class file where the class extends java.lang.Record and defines record component attributes (so it is a "record" just as javac would create it), but with additional "features" which javac would not allow: most notably, the fields backing the record components are not declared final, so instances of the class are mutable.
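
For concreteness, here is a small diagnostic sketch (not part of the original question; the Point record and class name are invented for illustration) that uses reflection to report whether the fields backing a class's record components are final. For a record compiled by javac, every backing field reports final; a transformed class file of the kind described above would presumably report otherwise.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.lang.reflect.RecordComponent;

public class RecordShapeCheck {
    record Point(int x, int y) {}   // an ordinary javac-compiled record, for comparison

    public static void main(String[] args) throws Exception {
        inspect(Point.class);
    }

    // Reports, for each record component, whether its backing field is final.
    static void inspect(Class<?> cls) throws NoSuchFieldException {
        System.out.println(cls.getName() + " isRecord=" + cls.isRecord());
        for (RecordComponent rc : cls.getRecordComponents()) {
            Field backing = cls.getDeclaredField(rc.getName());
            System.out.printf("  component %s: backing field final=%b%n",
                    rc.getName(), Modifier.isFinal(backing.getModifiers()));
        }
    }
}
```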

To me, this seems illegal, and I would have expected a JVM verification error. I would like to know whether this is something that is "supported", which I can build upon, or whether the lack of verification is a JVM bug. Are records just a Java language feature without JVM support? I read that the final fields of records are "truly final" and cannot be changed even through reflection, so I assumed there must be special JVM support that makes sure records match the Java language semantics...
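
On the reflection point, the following minimal sketch (hypothetical names, a recent JDK such as 17+ assumed) shows what "truly final" means in practice: even after setAccessible(true) succeeds, writing to a record's instance field is rejected with IllegalAccessException, whereas such a write has historically been possible on final instance fields of ordinary classes.

```java
import java.lang.reflect.Field;

public class RecordFinalFieldDemo {
    record Point(int x, int y) {}

    public static void main(String[] args) throws Exception {
        Point p = new Point(1, 2);
        Field x = Point.class.getDeclaredField("x");
        x.setAccessible(true);   // suppressing the access check itself succeeds here
        try {
            x.set(p, 42);        // but write access to a record's final field cannot be enabled
            System.out.println("unexpectedly succeeded: " + p);
        } catch (IllegalAccessException e) {
            System.out.println("rejected as expected: " + e.getMessage());
        }
    }
}
```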


Solution

  • Your question posits a false dichotomy. Records are a language feature, with some degree of JVM support (primarily for reflection), but that doesn't mean that the JVM will (or even can) enforce all the requirements on records that the language imposes. (Gaps like this are inevitable, as the JVM is a more general computing substrate, one that serves other languages besides Java; for example, the JVM permits methods to be overloaded on return type, but the language does not.)

    That said, the behavior you describe is a pretty serious party foul, and those who engage in it should be shamed out of the community. (It is also possible they are doing so out of ignorance, in which case they might be educable.) Most likely, these people think they are being "clever" in subverting rules they don't like, but in the process they are poisoning the well by promoting behaviors that users may find astonishing.

    EDIT

    The author of the transformer posted some further context here about what they were trying to accomplish. I'll give them credit for making a good-faith effort to conform to the semantics of records, but the transformation still undermines the final field semantics, and it only appears to work for records that do not customize the constructor, the accessors, equals, or hashCode. That describes a lot of records, but not all (the sketch at the end of this answer shows one way a customized constructor trips this up). This is a good cautionary tale: even when trying to preserve the semantics of a class while transforming it, it is easy to make questionable assumptions about what the class does or does not do, and those assumptions can get in the way.

    The author waves away the concern about the final field semantics as "not likely to cause a problem." But that is not the bar. The language provides certain semantics for records; this transformation undermines those semantics and yet still tells users that these classes are records. Even if the deviations are "minor" and "unlikely", they break the semantics that the Java language promises, and "99% compatible" rounds to zero in this case. So I stand by my assertion that this framework is taking inappropriate liberties with the language semantics. They may have been well-intentioned, and they may have tried hard not to break things, but break things they did.
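
    To make the "customized constructor" caveat concrete, here is a minimal, hypothetical sketch (the Fraction record is invented for illustration and is not from the framework in question). The invariant lives in the compact constructor; a transformer that writes the backing fields directly, or rebuilds instances without going through the canonical constructor, silently bypasses that check and can produce a state the record's own code promises cannot exist.

    ```java
    public class CustomizedRecordExample {
        // The invariant is enforced in the compact constructor, not by the fields themselves.
        record Fraction(int numerator, int denominator) {
            Fraction {
                if (denominator == 0) {
                    throw new IllegalArgumentException("denominator must be non-zero");
                }
            }
            double value() { return (double) numerator / denominator; }
        }

        public static void main(String[] args) {
            Fraction half = new Fraction(1, 2);
            System.out.println(half + " = " + half.value());
            // new Fraction(1, 0);  // rejected when constructed normally;
            // a direct field-level write of denominator = 0 would not be.
        }
    }
    ```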