Tags: scala, type-conversion, implicit, scala-3, given

Why can't the compiler chain conversions?


Let T1, T2, T3 be three types. We also define two given instances of the Conversion class so that the compiler can go from T1 to T2 and from T2 to T3.

The following code then compiles fine:

import scala.language.implicitConversions // avoids the feature warning when the conversions below are applied

type T1
type T2
type T3

given Conversion[T1, T2] with
    override def apply(x: T1): T2 = ???

given Conversion[T2, T3] with
    override def apply(x: T2): T3 = ???

val t1: T1 = ???
val t2: T2 = t1
val t3: T3 = t2

But what happens when we try to go from T1 to T3? The compiler won't let us:

val t3: T3 = t1
             ^^
             Found:    (t1: T1)
             Required: T3

My Question: is there a specific reason why the compiler cannot natively chain conversions (see my workaround below)?

My Workaround: it turns out that we can tell the compiler how to chain conversions by defining a generic conversion from A to C, given that we know how to convert A to B and B to C:

given [A, B, C](using conv1: Conversion[A, B])(using conv2: Conversion[B, C]): Conversion[A, C] with
    override def apply(x: A): C = conv2.apply(conv1.apply(x))

The compiler can now chain conversions:

val t3: T3 = t1 //OK

Bonus point: this new generic conversion can even resolve itself recursively, meaning that we can chain arbitrarily many conversions.
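
For instance, taking that bonus point at face value, here is a minimal sketch (T4, its base conversion from T3 and t4 are hypothetical additions to the setup above): resolving Conversion[T1, T4] should apply the generic chaining given twice.

type T4

given Conversion[T3, T4] with
    override def apply(x: T3): T4 = ???

val t4: T4 = t1 //OK: T1 -> T2 -> T3 -> T4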


Solution

  • It's not a question of "can't" but "doesn't". The limitation is a consciously chosen one.

    From Programming in Scala by Odersky, Spoon, and Venners (I'm quoting the Third Edition, which uses Scala 2 terminology, but modulo the terminology difference it still applies):

    One at a time rule: Only one implicit [conversion] is inserted. The compiler will never rewrite x + y to convert1(convert2(x)) + y. Doing so would cause compile times to increase dramatically on erroneous code, and it would increase the difference between what the programmer writes and what the program actually does. For sanity's sake, the compiler does not insert further implicit conversions when it's in the middle of trying another implicit conversion. However, it's possible to circumvent this restriction by having implicits take implicit parameters.

    Elaborating on the argument: set aside the more subjective point about the gap between what the program appears to say and what it actually does after conversions are applied (one could just as easily argue, as nearly every other language designer implicitly does by not having Scala-style conversions at all, that inserting any implicit conversion unduly widens that gap). Consider instead the consequence of having no limit on the conversion depth (recall the dictum: "zero, one, or no limit").

    If I have a type that doesn't have a method foo(), an expression like x.foo() can't be rejected by the compiler until it has computed the transitive closure of type conversions from the type of x and found that not a single type in it has a foo() method. In an IDE this would mean a pretty long delay before the "red squiggly" appears (or the IDE might assume that past some depth of conversions it's probably worth a squiggly; not that any widely used Scala IDE has a history of disagreeing with what the actual compiler accepts or rejects...).

    Even for accepted programs, there's a case to be made that all possible paths through the conversion space would have to be tried: as a general rule the compiler will not apply a conversion when there are multiple candidates that could apply (unless one of them is "more specific" than the others), and it can't know how many candidates there are without potentially trying every path. It could stop as soon as it found a second conversion, but in the typical case, where there is only one, it would have computed the transitive closure before accepting. By limiting insertion to a single conversion, the debate about how to handle this is nullified; the sketch below shows the ambiguity rule in the simplest case.
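
    As a minimal sketch of that "multiple candidates" point (my addition; the types S1 and S2 and the given names viaA/viaB are made up), two equally applicable conversions to the same target type mean that no conversion is inserted at all:

    type S1
    type S2

    given viaA: Conversion[S1, S2] with
        override def apply(x: S1): S2 = ???

    given viaB: Conversion[S1, S2] with
        override def apply(x: S1): S2 = ???

    val s1: S1 = ???
    // val s2: S2 = s1 // rejected: both viaA and viaB could apply, so neither is inserted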
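
    Relatedly, the "one at a time" rule quoted above limits the compiler to one conversion per insertion point, not one per expression overall. Spelling out the intermediate type by hand creates two separate insertion points, so this variant of the question's failing line should compile even without the generic chaining given (a sketch reusing the question's original setup):

    val alsoT3: T3 = (t1: T2) // one conversion for the ascription, a second for the expected type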