Hypervisors isolate the different operating systems running on the same physical machine from each other. Within this definition, separation of non-volatile memory (such as hard drives or flash) is required as well.
With Type-2 hypervisors, it is easy to understand how they separate non-volatile memory: they simply use the file system implementation of the underlying OS to allocate a different "hard-drive file" to each VM.
But when I come to think about Type-1 hypervisors, the problem becomes harder. They can use the IOMMU to isolate different hardware interfaces, but when there is just one non-volatile memory interface in the system, I don't see how that helps.
So one way to implement it would be to split the one device into two "partitions" and have the hypervisor interpret calls from the VMs and decide whether each call is legitimate. I'm not well versed in the communication protocols of non-volatile storage interfaces, but the hypervisor would have to be familiar with those protocols in order to make that verdict, which sounds (maybe) like overkill.
Are there other ways to implement this kind of isolation?
Yes, you are right: the hypervisor has to be familiar with those protocols to make the isolation possible. The overhead depends mostly on the protocol. For example, NVMe-based SSDs work over PCIe, and some NVMe devices support SR-IOV, which greatly reduces the effort; others don't, leaving the burden on the hypervisor.
This support is mostly configured at build time: how much storage is given to each guest, the command privileges of each guest, and so on. Then, when a guest sends a command, the hypervisor verifies its bounds and forwards it accordingly.
So why is there no hardware support for this, as there is with the MMU or IOMMU? There are hundreds of types of such devices with different protocols (NVMe, AHCI, etc.), and if a vendor tried to support all of them to allow better virtualization, they would end up with a huge chip that isn't going to fit.