Tags: embedded, stm32, i2c, stm32cubemx

STM32CubeMX I2C code writing to reserved register bits


I'm developing an I2C driver for the STM32F74x family of processors. I'm using the STM32CubeMX Low Level (LL) drivers, and I can't make sense of the generated defines for the I2C start and stop generation values written to register CR2.

The code is generated in stm32f7xx_ll_i2c.h and is as follows.

/** @defgroup I2C_LL_EC_GENERATE Start And Stop Generation
 * @{
 */
#define LL_I2C_GENERATE_NOSTARTSTOP  0x00000000U                                              /*!< Don't Generate Stop and Start condition. */
#define LL_I2C_GENERATE_STOP         (uint32_t)(0x80000000U | I2C_CR2_STOP)                   /*!< Generate Stop condition (Size should be set to 0). */
#define LL_I2C_GENERATE_START_READ   (uint32_t)(0x80000000U | I2C_CR2_START | I2C_CR2_RD_WRN) /*!< Generate Start for read request. */

My question is: why is bit 31 (0x80000000U) included in these defines? The reference manual (RM0385) states "Bits 31:27 Reserved, must be kept at reset value." I can't decide between modifying the generated code and keeping bit 31 as generated. I'll happily take recommendations simply on whether it's more likely that this bit is needed, or that I'm going to break things by writing to a reserved bit.

[Image: I2C_CR2 register bit map from RM0385]
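
For context, as far as I can tell these values end up as the Request argument of LL_I2C_HandleTransfer(), which writes them into CR2. A rough sketch of my usage (the peripheral instance, slave address and transfer length are from my own setup, not part of the question):

#include "stm32f7xx_ll_i2c.h"

/* Sketch: start a 4-byte read from a 7-bit slave at 0x48 (values are my own). */
LL_I2C_HandleTransfer(I2C1,
                      (0x48U << 1),                /* 7-bit address, left-aligned  */
                      LL_I2C_ADDRSLAVE_7BIT,
                      4U,                          /* transfer size                */
                      LL_I2C_MODE_AUTOEND,
                      LL_I2C_GENERATE_START_READ); /* the define in question       */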

Thanks in advance!


Solution

  • I am guessing here, because who knows what was on the minds of the library authors (not a lot, if you look at the source code!). But I would guess that it is a "dirty trick" to check that, when calling LL functions, you are using the specified macros.

    However, it is severely flawed, because the "trick" only makes sense on the Cortex-M3/M4 STM32 variants (e.g. F1xx, F2xx, F4xx), where the I2C peripheral is very different and registers such as I2C_CR2 are only 15 bits wide, so a check bit parked in bit 31 could never clash with a real register bit. On the newer F7 peripheral, CR2 is a full 32-bit register and bit 31 is one of the documented reserved bits.

    The trick is that the library functions have parameter checking asserts such as:

    assert_param(IS_TRANSFER_REQUEST(Request));
    

    where IS_TRANSFER_REQUEST is defined thus:

    #define IS_TRANSFER_REQUEST(REQUEST)    (((REQUEST) == I2C_GENERATE_STOP)        || \
                                             ((REQUEST) == I2C_GENERATE_START_READ)  || \
                                             ((REQUEST) == I2C_GENERATE_START_WRITE) || \
                                             ((REQUEST) == I2C_NO_STARTSTOP))
    

    This forces you to use the LL-defined macros as parameters, and not some self-defined or calculated mask, because only those macros have that "unused" check bit set in them (see the sketch of assert_param() below).
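
    For reference, assert_param() only does anything when full asserts are enabled. Roughly how it is wired up in the stm32f7xx_hal_conf.h template (a sketch; your project's configuration may differ):

    /* Sketch of the usual assert_param() plumbing from the hal_conf template;
     * details may vary between CubeMX versions. */
    #ifdef USE_FULL_ASSERT
      #define assert_param(expr) ((expr) ? (void)0U : assert_failed((uint8_t *)__FILE__, __LINE__))
      void assert_failed(uint8_t *file, uint32_t line);
    #else
      #define assert_param(expr) ((void)0U)   /* parameter checks compiled out */
    #endif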

    If that truly is the reason, it is an ill-advised practice that did not envisage the newer I2C peripheral. You might think that the bit is stripped from the parameter before being written to the register; I have checked, and it is not. And if it were, you would be paying for that overhead on every call, which is also undesirable.
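
    For illustration only, and not what the library actually does: stripping the bit yourself before a raw register write would amount to a single mask, something like the sketch below (the CR2 field mask shown is just an example).

    /* Sketch only: drop the check bit before the value reaches CR2.
     * MODIFY_REG is the usual ST register helper from the device header. */
    uint32_t request = LL_I2C_GENERATE_START_READ & ~0x80000000U;  /* clear bit 31 */
    MODIFY_REG(I2C1->CR2,
               I2C_CR2_START | I2C_CR2_STOP | I2C_CR2_RD_WRN,      /* fields updated here */
               request);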

    As an error-detection technique, if that is what it is, it is not even applied consistently. For example, all the GPIO_PIN_xx macros are 16 bits wide, and since they are masks, not pin numbers, using bit 31 there could guard against passing a literal pin number 10 where the mask 1<<10 is in fact required: passing 10 would refer to pins 3 and 1, not pin 10 (illustrated below). And to be honest, that mistake is far more likely than passing an incorrect I2C transfer request type.
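
    To make the pin example concrete, a sketch using the HAL GPIO call (the port is arbitrary; GPIO_PIN_10 follows the usual convention of (1 << 10), i.e. 0x0400):

    HAL_GPIO_WritePin(GPIOB, GPIO_PIN_10, GPIO_PIN_SET);  /* correct: a pin mask                  */
    HAL_GPIO_WritePin(GPIOB, 10U, GPIO_PIN_SET);          /* wrong: 10 is 0b1010 -> pins 3 and 1  */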

    In the end, however, "Reserved" generally means "unused, but may be used in future implementations", and requiring you to keep the reset value is a way of ensuring forward binary compatibility. If such a device appeared, no doubt there would be a corresponding library update to support it, but that would require recompilation of your code. The risk is low, and probably only a problem if you attempt to run this binary on a newer, incompatible part that actually uses these bits.
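
    If you do decide to write CR2 yourself, the safe pattern for registers with reserved fields is a read-modify-write that touches only the documented bits. A sketch (the fields chosen are just an example):

    /* Sketch: keep reserved bits at their reset value by only ever
     * clearing/setting documented fields. */
    uint32_t cr2 = I2C1->CR2;
    cr2 &= ~(I2C_CR2_START | I2C_CR2_STOP | I2C_CR2_RD_WRN);  /* clear known fields only */
    cr2 |= I2C_CR2_START | I2C_CR2_RD_WRN;                    /* request a start + read  */
    I2C1->CR2 = cr2;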