I'm curious as to when evaluation sets in: apparently certain operators are transformed into clauses rather than evaluated. Consider this model:
abstract sig Element {}
one sig A, B, C extends Element {}
one sig Test {
test: set Element
}
pred Test1 { Test.test = A+B }
pred Test2 { Test.test = Element-C }
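For reference, the commands are nothing special, just plain run commands along these lines (scope left at the default, which shouldn't matter here since the abstract Element and the one sigs fix all cardinalities anyway):

run Test1
run Test2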
Running the analysis for Test1 and Test2 respectively gives different numbers for vars/clauses, specifically:
Test1: 0 vars, 0 primary vars, 0 clauses
Test2: 5 vars, 3 primary vars, 4 clauses
So although Element is abstract and all its members and their cardinalities are known, the set difference seems not to be computed in advance, while the union is. I don't want to make any assumptions, so I'm interested in why that is. Is the + operator special?
To give some context, I tried to limit the domain of a relation and found that using only + seems to be more efficient, even when the sets are completely known in advance.
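Concretely, by limiting the domain I mean declarations roughly like these (R, S, r and s are placeholder names for this sketch; the real model is bigger, but the pattern is the same):

one sig R { r: (A+B) -> Element }        -- domain limited with a union
one sig S { s: (Element-C) -> Element }  -- same domain, written as a difference

Both declarations allow exactly the same instances, yet the union version is the one that seems to translate to fewer vars and clauses.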
That is pretty much the right conclusion. The reason is that the Alloy Analyzer tries to infer relation bounds from certain Alloy idioms. It uses a conservative approximation that is always sound for set union and product, but not for set difference. That's why for Test1 in the example above the Alloy Analyzer infers a fixed bound for the test relation (this/Test.test: [[[A$0], [B$0]]]), so no solver needs to be invoked; for Test2, the bound for the test relation cannot be shrunk, so it is set to the most permissive one (this/Test.test: [[], [[A$0], [B$0], [C$0]]]), and thus a solver needs to be invoked to find a solution satisfying the constraints given those bounds.
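A small sketch that makes the union/product part concrete (Pair and p are made-up names; I have not measured the exact numbers, which will also depend on the Alloy version):

one sig Pair { p: Element -> Element }
-- union and product are covered by the bound inference, so I'd expect the
-- Analyzer to fix the bound of p exactly here, just as it does for Test1
pred PairViaUnion { Pair.p = (A+B) -> (B+C) }
-- the same relation written with set differences: the inference cannot shrink
-- the bound, so a solver run is needed, as for Test2
pred PairViaDifference { Pair.p = (Element-C) -> (Element-A) }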