I have the following DTOs:
ProductDTO
UserDTO
UserRoleDTO
I need to fetch a list of Products and each of those Products has a field "creatorId" (NULLABLE).
Product -> 1:1 -> User -> 1:N -> UserRoles
I got this to work with the following jOOQ query:
dsl()
    .select(
        asterisk(),
        multiset(
            selectDistinct(asterisk())
            .from(UserRole.USER_ROLE)
            .where(UserRole.USER_ROLE.USERID.eq(Product.PRODUCT.CREATORID))
        ).as("roles"))
    .from(Product.PRODUCT)
    .leftJoin(User.USER)
    .on(User.USER.USERID.eq(Product.PRODUCT.CREATORID))
    .where(...)
    .groupBy(Product.PRODUCT.PRODUCTID)
    .orderBy(...)
    .offset(...)
    .limit(...)
    .fetch().map(new RecordMapper<Record, ProductDTO>() {
        @Override
        public @Nullable ProductDTO map(Record record) {
            ProductDTO product = record.into(ProductDTO.class);
            UserDTO creator = record.into(UserDTO.class);
            List<UserRoleDTO> creatorRoles = record.get("roles", Result.class).into(UserRoleDTO.class);
            creator.setRoles(creatorRoles);
            product.setCreator(creator);
            return product;
        }
    });
Now my question:
Is this mapping also possible "implicitly", without the explicit RecordMapper part, so that the mapping is reusable with less boilerplate across different statements (fetch, fetchCount, fetchStream)? (It is possible in this example too, but the RecordMapper has to be defined again on each of those statements.)
I'm already satisfied that I got this to work, as I only found examples using multiset with direct 1:N relationships, but what I have here is a 1:1 relationship that (on a deeper level) also contains a 1:N relationship.
Normally one could just make two separate SELECT statements, but for my use case (remote pagination with filtering), it is necessary that all relationships are processed in one single SELECT statement, so I can use the joins for additional filtering on the query.
Some general rules of thumb: avoid the asterisk() operator and .class literals in your mappers (which use reflection, which isn't type safe). Of course you can still do this, it's supported in principle, but then you won't know if you wrote a correct query and mapping until you run it.
Doesn't this look much simpler?
dsl().select(
        // Project the PRODUCT fields
        PRODUCT.ID,
        PRODUCT.NAME,
        ...

        // Nest a projection of USER fields (using implicit joins)
        row(PRODUCT.user().ID, PRODUCT.user().NAME, ...).mapping(UserDTO::new),

        // Nest a collection of USER_ROLE fields (using join path correlation)
        multiset(
            select(
                PRODUCT.user().role().ID,
                PRODUCT.user().role().NAME, ...)
            .from(PRODUCT.user().role())
        ).convertFrom(r -> r.map(Records.mapping(UserRoleDTO::new))))
   .from(PRODUCT)
   .where(...)
   .orderBy(...)
   .offset(...)
   .limit(...)
   .fetch(Records.mapping(ProductDTO::new));
This is assuming you design your DTOs like this, for example:
record UserRoleDTO(long id, String name, ...) {}
record UserDTO(long id, String name, ...) {}
record ProductDTO(long id, String name, UserDTO creator, List<UserRoleDTO> roles) {}
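On the reusability part of your question: with this style, the mapping lives in Records.mapping(...) and in the ad-hoc converters inside the projection, so the query itself can be built once and consumed by different terminal calls. A minimal sketch, not taken from the code above; it assumes the usual static imports of org.jooq.impl.DSL and the generated tables, Long/String column types, and DTOs with exactly the projected components:

// Define the final mapper once; the record types are assumptions about the schema
RecordMapper<Record4<Long, String, UserDTO, List<UserRoleDTO>>, ProductDTO> toProduct =
    Records.mapping(ProductDTO::new);

var query = dsl()
    .select(
        PRODUCT.ID,
        PRODUCT.NAME,
        row(PRODUCT.user().ID, PRODUCT.user().NAME).mapping(UserDTO::new),
        multiset(
            select(PRODUCT.user().role().ID, PRODUCT.user().role().NAME)
            .from(PRODUCT.user().role())
        ).convertFrom(r -> r.map(Records.mapping(UserRoleDTO::new))))
    .from(PRODUCT)
    .where(PRODUCT.NAME.like("A%"))   // any additional filtering
    .orderBy(PRODUCT.ID)
    .offset(0)
    .limit(20);

// The same query object works for different consumption styles,
// without redefining the mapping each time:
List<ProductDTO> page = query.fetch(toProduct);               // eager list

try (var stream = query.fetchStream()) {                      // lazy stream
    stream.map(toProduct::map).forEach(System.out::println);
}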
For simplicity, the above is using implicit joins and implicit join path correlation. Both of these features really simplify these MULTISET queries, though they're obviously optional. You can still use explicit joins if you prefer that; see the sketch below.
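If those features aren't available in your jOOQ version, or you simply prefer explicit joins, the same nesting works with an explicit LEFT JOIN and an explicit correlation predicate, much like your original query, just with an explicit projection instead of asterisk(). A sketch using the column names from your question plus assumed USER.NAME, USER_ROLE.ID and USER_ROLE.NAME columns:

dsl()
    .select(
        PRODUCT.ID,
        PRODUCT.NAME,
        // explicit join target instead of PRODUCT.user()
        row(USER.USERID, USER.NAME).mapping(UserDTO::new),
        multiset(
            select(USER_ROLE.ID, USER_ROLE.NAME)
            .from(USER_ROLE)
            // explicit correlation instead of join path correlation
            .where(USER_ROLE.USERID.eq(PRODUCT.CREATORID))
        ).convertFrom(r -> r.map(Records.mapping(UserRoleDTO::new))))
    .from(PRODUCT)
    .leftJoin(USER).on(USER.USERID.eq(PRODUCT.CREATORID))
    .where(PRODUCT.NAME.like("A%"))
    .fetch(Records.mapping(ProductDTO::new));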
As for the COUNT(*) value: rather than running an extra round trip to count the outcome of a query, there's an option of using window functions instead, to calculate the COUNT(*) value in a single query. Though, given that the COUNT(*) value is independent of the whole USER and USER_ROLE nesting, perhaps running 2 separate queries is better anyway, where the COUNT(*) query omits all the nested data, depending on how exactly you intend to present pagination.
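For illustration, here's a sketch of the window function variant (again with the assumed column names, and with the nested projections trimmed for brevity). COUNT(*) OVER () is evaluated before OFFSET/LIMIT, so every fetched row carries the total number of matching rows:

var result = dsl()
    .select(
        PRODUCT.ID,
        PRODUCT.NAME,
        count().over().as("total_rows"))   // COUNT(*) OVER ()
    .from(PRODUCT)
    .where(PRODUCT.NAME.like("A%"))
    .orderBy(PRODUCT.ID)
    .offset(0)
    .limit(20)
    .fetch();

int total = result.isEmpty() ? 0 : result.get(0).get("total_rows", int.class);

The two-query alternative would run dsl().fetchCount(...) on a slimmed-down SELECT that keeps the same WHERE clause and joins but omits the MULTISET nesting.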