I can compute the number of ticks needed for a delay of, say, 500 ms using either:
const TickType_t xDelay = 500 / portTICK_PERIOD_MS;
or:
const TickType_t xDelay = pdMS_TO_TICKS(500);
where:
#define portTICK_PERIOD_MS ( ( TickType_t ) 1000 / configTICK_RATE_HZ )
#define pdMS_TO_TICKS( xTimeInMs ) ( ( TickType_t ) ( ( ( TickType_t ) ( xTimeInMs ) * ( TickType_t ) configTICK_RATE_HZ ) / ( TickType_t ) 1000U ) )
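For example, assuming configTICK_RATE_HZ is 1000 (a common default; it isn't fixed by the snippets above), portTICK_PERIOD_MS expands to 1, so 500 / portTICK_PERIOD_MS evaluates to 500 ticks, and pdMS_TO_TICKS(500) evaluates to (500 * 1000) / 1000 = 500 ticks as well.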
To me, both #defines seem to do the same thing. Is there any difference in performance, or anything else?
Using pdMS_TO_TICKS is a bit cleaner coding style. It exposes a little less of the 'internal' details of FreeRTOS and helps avoid mistakes like accidentally writing portTICK_PERIOD_MS * x where x / portTICK_PERIOD_MS was meant, as the OP did in a recent ESP32 forum question.
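For illustration, here is a minimal sketch of how each form might be used in a task (the task name vBlinkTask and the 500 ms period are made up for this example):

#include "FreeRTOS.h"
#include "task.h"

void vBlinkTask( void *pvParameters )
{
    /* Both expressions yield the same tick count when
       configTICK_RATE_HZ divides 1000 evenly. */
    const TickType_t xDelayDivide = 500 / portTICK_PERIOD_MS;
    const TickType_t xDelayMacro  = pdMS_TO_TICKS( 500 );
    ( void ) xDelayDivide;  /* kept only for comparison */

    for( ;; )
    {
        /* The macro form states the intent (milliseconds to ticks)
           without exposing the tick period arithmetic, and there is
           no division to accidentally mistype as a multiplication. */
        vTaskDelay( xDelayMacro );
    }
}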