
Difference between core.c and pci.c in the Linux NVMe driver


I want to learn how the NVMe driver works in Linux,

so I looked into the NVMe driver source code here.

What confuses me is that there are two source files containing module_init():

core.c

module_init(nvme_core_init);

and pci.c

module_init(nvme_init);

I know that module_init() marks the entry point of a driver,

but how come there are two entry points in the NVMe driver?


Solution

  • module_init() is the entry point of a module, not of the driver as a whole. The NVMe driver is split into layered modules to logically separate functionality, improve code reuse, and so on.

    This is a common idiom throughout the kernel, and is done so that if an NVMe device became accessible via another bus, then core.c would be reused with no / minimal changes, and new_bus.c would be written to interface between the two.
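
    For example (heavily abridged; the function bodies below are stand-ins rather than the real code, and exit paths are left out): core.c is built into one module, nvme-core.ko, and pci.c into another, nvme.ko. Each loadable module needs its own entry point, hence the two module_init() calls:

        /* core.c, built into the nvme-core module (transport-agnostic layer) */
        #include <linux/init.h>
        #include <linux/module.h>

        static int __init nvme_core_init(void)
        {
                /* Set up the resources every transport shares (workqueues,
                 * the character device class, ...). Reduced to a stub here. */
                return 0;
        }
        module_init(nvme_core_init);
        MODULE_LICENSE("GPL");

        /* pci.c, built into the nvme module (PCIe-specific layer) */
        #include <linux/init.h>
        #include <linux/module.h>
        #include <linux/pci.h>

        static struct pci_driver nvme_driver = {
                .name = "nvme",
                /* .id_table, .probe, .remove, ... omitted in this sketch */
        };

        static int __init nvme_init(void)
        {
                /* Register with the PCI bus layer; the PCI core then calls
                 * this driver's probe() for each matching NVMe device. */
                return pci_register_driver(&nvme_driver);
        }
        module_init(nvme_init);
        MODULE_LICENSE("GPL");

    So there is still exactly one module_init() per module; it only looks like two entry points because two cooperating modules live in the same source directory.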


    If you're using NVMe over PCIe, then the following chain will hopefully help things make sense (there is a compilable sketch of the same pattern further down):

    1. pci.c implements nvme_pci_reg_read32()
    2. pci.c registers nvme_pci_reg_read32() in an nvme_ctrl_ops structure named nvme_pci_ctrl_ops
    3. core.c implements nvme_init_ctrl(), which is called with a pointer to one of these structures
    4. core.c keeps a reference to the structure
    5. core.c implements nvme_init_identify(), which needs the assistance of the lower level, pci.c
    6. core.c calls pci.c's nvme_pci_reg_read32() via the reference retained above

    If we were to develop a new bus that could support an NVMe device, then we could swap out pci.c for new_bus.c with no changes to core.c (as mentioned above).
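
    To make the indirection concrete, here is a small, self-contained user-space mock-up of the same pattern. It borrows names from the real driver (nvme_ctrl_ops, nvme_pci_reg_read32, nvme_init_ctrl, nvme_init_identify), but the bodies, signatures and the register value are simplified or invented purely for illustration:

        /* ops_demo.c - NOT kernel code; a toy showing the ops-table pattern
         * that core.c and pci.c use to talk to each other. */
        #include <stdint.h>
        #include <stdio.h>

        struct nvme_ctrl;

        /* The contract core.c expects every transport to fulfil
         * (struct nvme_ctrl_ops in the real driver, trimmed down here). */
        struct nvme_ctrl_ops {
                int (*reg_read32)(struct nvme_ctrl *ctrl, uint32_t off, uint32_t *val);
        };

        /* The controller object shared between the core and the transport. */
        struct nvme_ctrl {
                const struct nvme_ctrl_ops *ops;
        };

        /* --- "pci.c" side: the transport-specific implementation --- */

        static int nvme_pci_reg_read32(struct nvme_ctrl *ctrl, uint32_t off, uint32_t *val)
        {
                (void)ctrl;
                /* The real function reads a memory-mapped PCI BAR; fake it here. */
                *val = 0xdead0000u | off;
                return 0;
        }

        static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
                .reg_read32 = nvme_pci_reg_read32,
        };

        /* --- "core.c" side: transport-agnostic code --- */

        /* Stand-in for nvme_init_ctrl(): the core just remembers which ops
         * table the transport handed it. */
        static void nvme_init_ctrl(struct nvme_ctrl *ctrl, const struct nvme_ctrl_ops *ops)
        {
                ctrl->ops = ops;
        }

        /* Stand-in for nvme_init_identify(): the core reads a controller
         * register without knowing the transport happens to be PCIe. */
        static int nvme_init_identify(struct nvme_ctrl *ctrl)
        {
                uint32_t vs;
                int ret = ctrl->ops->reg_read32(ctrl, 0x08 /* NVME_REG_VS */, &vs);

                if (ret)
                        return ret;
                printf("controller version register: 0x%08x\n", (unsigned)vs);
                return 0;
        }

        int main(void)
        {
                struct nvme_ctrl ctrl;

                /* "pci.c" hands its ops table to "core.c" ... */
                nvme_init_ctrl(&ctrl, &nvme_pci_ctrl_ops);
                /* ... and "core.c" calls back through it. A hypothetical
                 * new_bus.c would only have to supply its own ops table. */
                return nvme_init_identify(&ctrl);
        }

    The kernel is full of this pattern (file_operations, net_device_ops, ...), so once you spot the ops structure it becomes much easier to see why a driver is split the way core.c and pci.c are.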


    It's also worth checking out the Kconfig files as they can hint at things like this - though there is a certain amount of mental gymnastics to tie the source files to the menu options via the Makefiles.
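
    For reference, that mapping lives in drivers/nvme/host/. Paraphrasing from memory (the exact contents vary between kernel versions), the Makefile and Kconfig look roughly like this:

        # drivers/nvme/host/Makefile (abridged)
        obj-$(CONFIG_NVME_CORE)      += nvme-core.o
        obj-$(CONFIG_BLK_DEV_NVME)   += nvme.o

        nvme-core-y                  += core.o
        nvme-y                       += pci.o

        # drivers/nvme/host/Kconfig (abridged)
        config NVME_CORE
                tristate

        config BLK_DEV_NVME
                tristate "NVM Express block device"
                depends on PCI && BLOCK
                select NVME_CORE

    In other words, the "NVM Express block device" menu option builds pci.c into nvme.ko, and it pulls in core.c (as nvme-core.ko) via the select.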