I'm aware that in most modern architectures the CPU sends read and write requests to a memory management unit (MMU) rather than directly to the RAM controller.
If other peripherals are also addressed, that is, read from and written to over an address bus, are they also accessed through virtual addresses? In other words, to talk to a USB drive etc., does the CPU send the target virtual address to an MMU, which translates it to a physical one? Or does it simply write to a physical address with no intermediary device?
I can't speak globally; there may be exceptions. But that is the general idea: the CPU memory interface goes completely through the MMU (and completely through a cache or layers of caches).
For peripherals to actually work, you have to mark the peripheral's address space as not cached; otherwise the first read of, say, a status register gets cached, and subsequent reads return the cached copy rather than the real register. So for example on an ARM, and no doubt others, where you have separate I and D cache enables, you can turn on the I cache without the MMU, but to turn on the D cache and not have this peripheral problem you need the MMU on, with the peripheral space in the tables and marked as not cached.
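To make that concrete, here is a minimal sketch (not taken from any particular BSP; the base addresses are placeholders and the attribute bits follow the ARMv7-A short-descriptor section format, so check the ARM ARM for your specific core) of a first-level table built in C, with RAM marked cacheable and a 1 MB peripheral section marked as Device (non-cacheable) memory:

```c
#include <stdint.h>

/* Placeholder addresses -- substitute your SoC's real memory map. */
#define RAM_BASE    0x00000000u
#define PERIPH_BASE 0x40000000u

/* 4096-entry first-level table; each entry maps one 1 MB section. */
static uint32_t l1_table[4096] __attribute__((aligned(16384)));

/* ARMv7-A short-descriptor section attribute bits (simplified). */
#define SECTION     0x2u          /* descriptor type = section           */
#define AP_RW       (0x3u << 10)  /* full read/write access              */
#define CACHEABLE   (1u << 3)     /* C bit                               */
#define BUFFERABLE  (1u << 2)     /* B bit                               */

static void map_section(uint32_t va, uint32_t pa, uint32_t attrs)
{
    l1_table[va >> 20] = (pa & 0xFFF00000u) | attrs | SECTION;
}

void build_tables(void)
{
    /* RAM: normal memory, write-back cacheable (C=1, B=1). */
    map_section(RAM_BASE, RAM_BASE, AP_RW | CACHEABLE | BUFFERABLE);

    /* Peripheral: Device memory (C=0, B=1), so every load/store is a
       real bus transaction instead of a hit on a stale cached copy.  */
    map_section(PERIPH_BASE, PERIPH_BASE, AP_RW | BUFFERABLE);
}
```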
It is up to the software designers to decide whether they want the virtual addresses for the peripherals to match the physical ones or to move the peripherals elsewhere; both have pros and cons.
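Continuing the map_section sketch above (the 0xF0000000 virtual base is an arbitrary choice purely for illustration), the two options look like this:

```c
#define PERIPH_PA      0x40000000u  /* physical peripheral block (placeholder)       */
#define PERIPH_VA_HIGH 0xF0000000u  /* made-up virtual home for the remapped option  */

void map_peripherals(void)
{
    /* Option A: identity map, virtual == physical.
       Pro: the datasheet, the debugger, and the code all show the same address. */
    map_section(PERIPH_PA, PERIPH_PA, AP_RW | BUFFERABLE);

    /* Option B: move the peripherals to a high virtual address.
       Pro: keeps the low virtual range free for RAM or user space.
       Con: every driver and debug session has to know about the translation.    */
    map_section(PERIPH_VA_HIGH, PERIPH_PA, AP_RW | BUFFERABLE);
}
```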
It is certainly possible to design a chip/system where an address space automatically bypasses the MMU or cache. That can make the buses ugly, and/or the chip may have separate buses for peripherals and RAM, or other solutions, so the above is not necessarily a universal answer, but for say an ARM, and I would assume an x86, that is how it works. On the ARMs I am familiar with, the MMU and L1 cache are in the core; the L2 is outside, and the L3 beyond that if you have one. The L2 (if you have one, from ARM) is literally between the core and the world, but the AXI/AMBA bus has cacheable settings, so each transaction may or may not be marked as cacheable; if not cacheable, it passes right through the L2 logic. The MMU, if enabled, determines that on a per-transaction basis.
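On the software side, once the region is mapped non-cacheable, a driver just uses volatile accesses to the (virtual) peripheral address; nothing special is needed per access, because the attributes from the MMU tables ride along with each bus transaction. The register names, offsets, and bit here are invented for illustration:

```c
#include <stdint.h>

/* Invented register layout -- substitute your controller's real offsets. */
#define UART_VBASE   0x40001000u                               /* virtual base after mapping */
#define UART_STATUS  (*(volatile uint32_t *)(UART_VBASE + 0x0))
#define UART_TXDATA  (*(volatile uint32_t *)(UART_VBASE + 0x4))
#define TX_READY     (1u << 0)

void uart_putc(char c)
{
    /* Each read of UART_STATUS is a real bus transaction: the section is
       marked Device/non-cacheable, so neither the compiler (volatile) nor
       the cache hierarchy is allowed to hand back a stale copy.           */
    while ((UART_STATUS & TX_READY) == 0)
        ;
    UART_TXDATA = (uint32_t)c;
}
```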