In our database class, our instructor showed this as an example of a dependency-preserving decomposition:
R(A, B, C) with F = { A->B, B->C } decomposed into R1(A, B) and R2(A, C)
In order for a decomposition to be dependency preserving, the database system must be able to check each functional dependency of the original F locally in one of the decomposed relations, without having to perform any joins.
Here, it's my understanding that the functional dependency B->C is lost, because it cannot be checked locally in either R1 or R2. But my instructor claimed that it is preserved by transitivity, since A->C.
Could someone please clarify why that's the case?
> In order for a decomposition to be dependency preserving, the database system must be able to check each functional dependency of the original F locally in one of the decomposed relations, without having to perform any joins.

No. It is not each dependency of the original F that must be checkable locally: the definition asks only that a cover of F be derivable from the dependencies that hold locally in the decomposed relations.
By definition, given a schema R with a cover of functional dependencies F, a decomposition is dependency preserving if and only if the union of the projections of F over the decomposed relations is a cover of F. Here the projection of F onto a subschema consists of all the dependencies in F+ (not only those in F) whose attributes are all contained in the subschema.
For instance, in a schema R(A, B, C) with a cover of functional dependencies F = {A → B, B → C, C → A}, decomposed into R1(A, B) and R2(B, C), the projection of F on R1 contains {A → B, B → A}, and the projection of F on R2 contains {B → C, C → B}. This is because B → A and C → B can be derived from the other dependencies of F. In this case, when we take the union of the projections, we obtain a set of dependencies from which C → A can also be derived, so this decomposition preserves the dependencies.
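To make the projection step concrete, here is a minimal Python sketch (my own illustration, not code from any particular DBMS): it computes attribute closures, projects F onto a subschema by taking the closure of each subset of the subschema's attributes, and then checks whether every dependency of F follows from the union of the projections. The helper names closure and project are made up for this example, and FDs are assumed to have single-attribute right-hand sides.

```python
from itertools import chain, combinations

def closure(attrs, fds):
    """Closure of a set of attributes under FDs given as (lhs_set, rhs_attr)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and rhs not in result:
                result.add(rhs)
                changed = True
    return result

def project(fds, subschema):
    """All nontrivial FDs X -> a in F+ with X and a contained in `subschema`."""
    projected = set()
    subsets = chain.from_iterable(
        combinations(sorted(subschema), r) for r in range(1, len(subschema) + 1)
    )
    for xs in subsets:
        x = frozenset(xs)
        for a in closure(x, fds) & subschema - x:
            projected.add((x, a))
    return projected

# F = {A -> B, B -> C, C -> A} on R(A, B, C)
F = {(frozenset("A"), "B"), (frozenset("B"), "C"), (frozenset("C"), "A")}

r1 = project(F, {"A", "B"})   # contains A -> B and B -> A
r2 = project(F, {"B", "C"})   # contains B -> C and C -> B

# Dependency preservation: every FD of F must follow from the union.
union = r1 | r2
print(all(rhs in closure(lhs, union) for lhs, rhs in F))
# True: C -> A is derivable, so this decomposition preserves F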
In your example, the projection of F on R1 gives {A → B}, while the projection of F on R2 gives {A → C}. Taking the union of the two sets, we obtain:
{A → B, A → C}
From this set we cannot derive B → C, so the decomposition does not preserve the dependencies. Note that your instructor's argument runs in the wrong direction: A → C is indeed derivable from F by transitivity, but B → C is not derivable from {A → B, A → C}, so the dependency is lost.
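You can verify this with the same machinery. A small self-contained sketch (again my own illustration) shows that the closure of {B} under {A → B, A → C} is just {B}, so B → C cannot be enforced without joining R1 and R2:

```python
# Checking the question's decomposition; the closure helper is repeated
# here so that this snippet runs on its own.

def closure(attrs, fds):
    """Closure of a set of attributes under FDs given as (lhs_set, rhs_attr)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and rhs not in result:
                result.add(rhs)
                changed = True
    return result

# Union of the projections of F = {A -> B, B -> C} onto R1(A, B) and R2(A, C)
union = {(frozenset("A"), "B"), (frozenset("A"), "C")}

# B -> C would follow only if C were in the closure of {B} under the union
print(closure({"B"}, union))         # {'B'}: B determines nothing else
print("C" in closure({"B"}, union))  # False: B -> C is lost
```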