I have an application where I need to filter two separate tabular models and return a table of summarized results from each, using values retrieved from a third semantic model as filter context for the calculations.
The process I have built creates a DATATABLE thousands of rows long using M, e.g.:
let
// _filter_table is buffered locally; its row_string column holds dynamically built text
// that later gets spliced into the downstream queries as DATATABLE row literals.
// Summarized values are added to it with ADDCOLUMNS.
_filter_table = let src = AnalysisServices.Query("myFirstSource", "schedule", [Query="evaluate addcolumns(
summarizecolumns(calendar[Day], times[EasternTime], ""startTime"", min(schedule[...
, ""row_string"", ""{"" & format([Day], "Short Date") & ", " & format([EasternTime], ""hh:mm"")
& "... etc. etc. ..."]) in Table.Buffer(src),
fact1 = let qrytext = "define table filter_table = datatable(""ColumnName"", STRING, ""Date"", DATETIME
, ""StartTime"", DATETIME, ""EndTime"" DATETIME, {"
& Text.Combine(_filter_table[row_string], ",")
& "}) evaluate ADDCOLUMNS( filter_table
, ""Measurement"", CALCULATE([SomeMeasure]
, local_table[Column]=[ColumnName]
, time_table[Period]>=[StartTime]
, time_table[Period]<=[EndTime]
, calendar[Day]=[Date]))",
...
result = AnalysisServices.Query("my source", "widgets", [Query=qrytext]) in result,
fact2 = let qrytext = "define table filter_table = datatable(""ColumnName"", STRING, ""Date"", DATETIME
, ""StartTime"", DATETIME, ""EndTime"" DATETIME, {"
& Text.Combine(_filter_table[row_string], ",")
& "}) evaluate ADDCOLUMNS( filter_table
, ""Measurement"", CALCULATE([DifferentMeasure]
, local_table[Column]=[ColumnName]
, time_table[Period]>=[StartTime]
, time_table[Period]<=[EndTime]
, calendar[Day]=[Date]))",
...
result = AnalysisServices.Query("my second source", "whatsits", [Query=qrytext]) in result,
finished_product = Table.Combine({fact1, fact2})
in
finished_product
In this way I can use the result of one Analysis Services query to drive calculations in two disparate semantic models. Unsurprisingly, this performs pretty badly as the number of rows in the filter table grows.
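For a sense of scale, here is roughly what qrytext expands to with just two filter rows (the values are invented for illustration, and the CALCULATE filters are elided as above). The query text grows linearly with the row count, so the engine has to parse and materialize an ever-larger DATATABLE literal on every run:

define table filter_table = datatable("ColumnName", STRING, "Date", DATETIME, "StartTime", DATETIME, "EndTime", DATETIME,
    { {"WidgetA", "2024-01-05", "2024-01-05 08:00:00", "2024-01-05 17:00:00"}
    , {"WidgetB", "2024-01-05", "2024-01-05 09:00:00", "2024-01-05 18:00:00"} })
evaluate ADDCOLUMNS( filter_table, "Measurement", CALCULATE([SomeMeasure], ...))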
What is a better pattern to follow when I have thousands (or hundreds of thousands) of filter rows that are only available dynamically from a query to an unrelated semantic model?
"use Direct Query and DAX to query your other semantic models"
Start by building a composite model in Power BI Desktop, referencing your other models. https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-composite-models
Once you've built the composite model, you can send it simple DAX queries and do whatever you want with the results, including loading them into an import-mode semantic model.
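For instance, here's a minimal sketch of that last step in M using the documented AnalysisServices.Database connector. The workspace path, model name, and the table/measure names are placeholders, and it assumes the composite model defines relationships between the schedule, widgets, and whatsits tables:

let
    // One DAX query against the composite model replaces the dynamic DATATABLE round-trip.
    // All names below are hypothetical placeholders.
    Source = AnalysisServices.Database(
        "powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace",
        "MyCompositeModel",
        [Query = "
            EVALUATE
            SUMMARIZECOLUMNS(
                calendar[Day],
                times[EasternTime],
                local_table[Column],
                ""Measure1"", [SomeMeasure],
                ""Measure2"", [DifferentMeasure]
            )"])
in
    Source

Because the filter values and both measures now live in one model, the filtering happens inside the engine through relationships instead of being serialized into query text.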
Or you could refactor the source models to store their results in a database, and build all the semantic models from that database.
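A sketch of that variant, assuming the schedule results get landed in a SQL Server table (server, database, schema, and table names are all hypothetical): each downstream model imports the shared table and joins it to its own facts through ordinary relationships, rather than receiving it as query text.

let
    // Import the pre-computed schedule/filter rows that every semantic model shares.
    // All names are hypothetical placeholders.
    Source = Sql.Database("myserver.database.windows.net", "SharedStaging"),
    ScheduleFilter = Source{[Schema = "dbo", Item = "ScheduleFilter"]}[Data]
in
    ScheduleFilter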