I was looking through some code on GitHub and came across this type declaration:
```scala
abstract class TreeNode[BaseType <: TreeNode[BaseType]] extends Product with TreePatternBits {
```
(This is from the Spark source code, though this question is not really about Spark but Scala's type system.)
This is new to me. In plain English, what does this type bound mean?
Hardly anyone reads the white papers on System F (mentioned in the comments); most people use F-bounded polymorphism intuitively.
All you need to understand is that `class`/`trait MyType[A <: MyType[A]]` forces you to extend it as `class Impl extends MyType[Impl]`, that is, passing itself as the type parameter:
```scala
trait Interface[A <: Interface[A]] {
  def getValue: String
  def setValue(value: String): A
}
```
This is useful when you want to write something like:
```scala
class Impl(value: String) extends Interface[Impl] {
  override def getValue: String = value
  override def setValue(value: String): Impl = new Impl(value)
}
```
Without this `A <: Interface[A]` bound (type `A` is a subtype of / extends `Interface[A]`), someone could have implemented:
```scala
// no F-bounded polymorphism
trait Interface[A] {
  def getValue: String
  def setValue(value: String): A
}

class Foo

class Impl(value: String) extends Interface[Foo] {
  override def getValue: String = value
  // we wanted to force the implementation to return Impl here, but now we cannot
  override def setValue(value: String): Foo = new Foo
}
```
It's also useful if you have interfaces that model, for example, some algebras:
```scala
trait Number[A <: Number[A]] {
  def add(a: A): A
}
```
This forces the implementation's methods to take and return its own type:
```scala
class Integer(val i: Int) extends Number[Integer] {
  def add(a: Integer): Integer = new Integer(i + a.i)
}
```
This pattern is more popular in OOP; FP offers an alternative approach using type classes.
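For contrast, here is a minimal sketch of that type-class alternative (the `NumberLike` name and the instance are my own illustration, not from any library). The operations live in a separate instance rather than in the type's inheritance hierarchy, so even a type you don't control, like `Int`, can participate without any F-bound:

```scala
// Type-class version: the instance, not the data type, carries the operation.
trait NumberLike[A] {
  def add(x: A, y: A): A
}

object NumberLike {
  // Hypothetical instance for Int; no F-bounded inheritance is needed.
  implicit val intNumberLike: NumberLike[Int] =
    new NumberLike[Int] { def add(x: Int, y: Int): Int = x + y }
}

// Works for any A that has a NumberLike instance in scope:
def sum[A](x: A, y: A)(implicit N: NumberLike[A]): A = N.add(x, y)

val three = sum(1, 2)
```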