I have a Standard logic app with three workflows. One of them is a larger workflow that uses multiple loops and Dataverse connectors.
Because of the loops and Dataverse connectors, this workflow is quite slow, which is fine. But recently my logic app has started to scale out, and I noticed that CPU and memory utilisation go up dramatically.
Is there a way to find out which workflow action inside my workflow is causing this? Is there a log or log table where I can see how long each workflow step took to execute, and perhaps how much memory and CPU it consumed?
If you're seeing high CPU and memory usage in an Azure Logic App with a large workflow that uses loops and Dataverse connectors, use the Logic App's built-in run history to track the duration of each workflow action:
In the Azure Portal:
Go to your Logic App and select Run history
Click on a run → expand each action to see its start time and duration.
Refer to the Microsoft documentation on monitoring Logic Apps runs. This lets you identify which actions are slow and potentially causing the resource spikes.
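If your Standard logic app sends telemetry to Application Insights, you can also query action durations in Log Analytics instead of clicking through runs one by one. Below is a sketch, assuming Application Insights is enabled; exactly which columns carry the workflow and action names (here `operation_Name` and `name`) can vary with your telemetry configuration, so verify them against your own data first:

```kusto
// Find the slowest operations over the last day.
// Assumes Application Insights telemetry; "duration" is in milliseconds.
requests
| where timestamp > ago(1d)
| summarize count(), avg(duration), max(duration) by name, operation_Name
| order by avg_duration desc
```

Sorting by average duration surfaces the actions most likely to dominate a run; a high count combined with a modest average often points at a loop body executed many times.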
Use Log Analytics performance counters to check CPU and memory usage
You already have performance counters (such as Available Bytes for memory and % Processor Time for CPU) that you can visualize with Kusto queries.
Below is an example Kusto query to view available memory over time:

```kusto
performanceCounters
| where counter == "Available Bytes"
| summarize avg(value), min(value) by bin(timestamp, 1h)
| render timechart
```
Below is an example comparing CPU usage across host instances:

```kusto
performanceCounters
| where counter == "% Processor Time"
| summarize avg(value) by cloud_RoleInstance, bin(timestamp, 1h)
```
To list the available counters:

```kusto
performanceCounters
| summarize count(), avg(value) by category, instance, counter
```
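To connect the two signals, you can correlate CPU spikes with whatever the logic app was executing at the same time. The sketch below joins `performanceCounters` against the Application Insights `requests` table; the 80% threshold and 5-minute bins are arbitrary assumptions, and `operation_Name` as the workflow identifier should be checked against your telemetry:

```kusto
// Time windows where average CPU exceeded 80% (threshold is an assumption).
let spikes = performanceCounters
| where counter == "% Processor Time"
| summarize avgCpu = avg(value) by bin(timestamp, 5m)
| where avgCpu > 80;
// What ran during those windows, and how long it took on average.
requests
| summarize runs = count(), avgDurationMs = avg(duration) by operation_Name, bin(timestamp, 5m)
| join kind=inner (spikes) on timestamp
| order by avgCpu desc
```

The operations that appear most often in the spike windows are the best candidates for the workflow (and actions) driving the scale-out.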
Refer to the Microsoft documentation on performance counters in Log Analytics.