I ran into this problem while playing with generics and custom operators in Swift. In the code snippet below, I introduce two new prefix operators, ∑ and ∏, and then implement their prefix functions as vector sum and product respectively. In order not to have to implement these and similar functions for all the integer and floating point types separately, I defined two protocols instead: Summable (which requires a + implementation) and Multiplicable (which requires a * implementation). I also implemented the two functions for SequenceType arguments, which works, for example, with Array and Range types. Finally, you can see from the println calls at the end of the snippet that this all works quite nicely, except for ∏(1...100). Here the program crashes with EXC_BAD_INSTRUCTION and not much else to go on. Note that ∑(1...100) works, even though it is implemented in the same way. In fact, if I change the initial value in the line return reduce(s, 1, {$0 * $1}) to 0, then the program completes without error, albeit with wrong outputs from the calls to ∏.
So, it all boils down to using 0 or 1 as the initial value!? When the code in the offending line is unpacked over several lines, it becomes clear that the crash occurs at $0 * $1. Note also that instead of the closures {$0 * $1} and {$0 + $1} I should be able to pass the * and + operator functions directly. Alas, this offends the compiler: "Partial application of generic method is not allowed".
Any ideas? How could swapping 1 (or any non-zero Int) for 0 cause a crash? And why does this only happen with ranges for multiplication, while ranges for addition work fine with either 0 or 1 as the initial value?
prefix operator ∑ {}
prefix operator ∏ {}
protocol Summable { func +(lhs: Self, rhs: Self) -> Self }
protocol Multiplicable { func *(lhs: Self, rhs: Self) -> Self }
extension Int: Summable, Multiplicable {}
extension Double: Summable, Multiplicable {}
prefix func ∑<T, S: SequenceType where T == S.Generator.Element,
T: protocol<IntegerLiteralConvertible, Summable>>(var s: S) -> T {
return reduce(s, 0, {$0 + $1})
}
prefix func ∏<T, S: SequenceType where T == S.Generator.Element,
T: protocol<IntegerLiteralConvertible, Multiplicable>>(var s: S) -> T {
return reduce(s, 1, {$0 * $1})
}
let ints = [1, 2, 3, 4]
let doubles: [Double] = [1, 2, 3, 4]
println("∑ints = \( ∑ints )") // --> ∑ints = 10
println("∑doubles = \( ∑doubles )") // --> ∑doubles = 10.0
println("∑(1...100) = \( ∑(1...100) )") // --> ∑(1...100) = 5050
println("∏ints = \( ∏ints )") // --> ∏ints = 24
println("∏doubles = \( ∏doubles )") // --> ∏doubles = 24.0
println("∏(1...100) = \( ∏(1...100) )") // --> CRASH: EXC_BAD_INSTRUCTION
EDIT: While rather embarrassing for me, the error I am making in this code makes for a cute test of your programming eye. See if you can figure it out before reading Martin's answer below. You'll feel good about yourself when you do. (I, however, might need to look for another career.)
That is a simple integer overflow. You are trying to compute the factorial
1 * 2 * ... * 100 = 100!
= 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
≈ 9.33 × 10^157
according to Wolfram Alpha. With an initial value of 0 instead of 1, all intermediate products are zero and the overflow does not occur.
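You can sanity-check the magnitude without big-integer support by doing the product in floating point (a quick check using current Swift syntax, not part of the question's code, which uses the older free-function reduce):

```swift
// Computing 100! in Double avoids the integer trap, at the cost of precision.
let factorial100 = (1...100).reduce(1.0) { $0 * Double($1) }
print(factorial100)  // on the order of 9.33e157, far beyond Int.max
```

Double can hold values up to about 1.8e308, so 9.33e157 fits comfortably, which is also why the ∏doubles call in the question works.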
∏(1...20) = 2432902008176640000
works as expected and is the largest factorial that can be stored in a 64-bit integer.
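You can check that 20! is indeed the limit by comparing against Int.max (a minimal sketch in current Swift syntax; the bound comparison is my illustration, not from the answer):

```swift
let factorial20 = 2_432_902_008_176_640_000  // 20!, fits in a signed 64-bit Int
// 21! = 20! * 21 would exceed Int.max, so ∏(1...21) would already trap.
print(Int.max)  // about 9.22e18, while 21! is about 5.1e19
assert(factorial20 > Int.max / 21)
```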
In Swift, integer calculations do not silently "wrap around"; instead they trap at runtime (which is what you see as EXC_BAD_INSTRUCTION) if the result does not fit into the target datatype.
Swift has special "overflow operators" &+, &*, ... with a different (wrap-around) overflow behavior for integer calculations; see "Overflow Operators" in the Swift documentation.
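For illustration, here is roughly how the overflow operators behave (a minimal sketch; the wrapped value itself is meaningless as a factorial):

```swift
// &* discards overflowing bits instead of trapping.
let factorial20 = 2_432_902_008_176_640_000  // 20!
let wrapped = factorial20 &* 21              // 21! modulo 2^64, reinterpreted as signed
print(wrapped)                               // a negative, nonsensical value
// With the plain * operator this same multiplication would trap at runtime.
```

This is why the crash only shows up with multiplication over the range: the sum 1 + 2 + ... + 100 = 5050 fits easily in an Int, while the product overflows soon after 20!.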