I've found that on a TLB miss, some architectures use hardware to walk the page tables while others let the OS handle it. But when it comes to page faults, almost all of them use the OS instead of hardware.
I tried to find the answer, but couldn't find any article that explains why.
Could anyone help with this? Thanks.
If the hardware could handle it on its own, it wouldn't need to fault.
The whole point is that the OS hasn't wired the page into the hardware page tables, e.g. because it's not actually in memory at all, or because the OS needs to catch an attempt to write so it can implement copy-on-write.
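As a concrete illustration of the copy-on-write case: after fork(), parent and child share all pages, and the child's first write to each page takes a cheap fault that the OS resolves by copying the page. A minimal sketch, assuming Linux/glibc (getrusage()'s fault counters are maintained by Linux; error handling omitted):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    size_t len = 64 * 4096;            /* 64 pages, assuming 4 KiB pages */
    char *buf = malloc(len);
    memset(buf, 1, len);               /* touch every page before forking */

    if (fork() == 0) {                 /* child: pages now shared copy-on-write */
        struct rusage before, after;
        getrusage(RUSAGE_SELF, &before);
        memset(buf, 2, len);           /* first write per page -> COW fault */
        getrusage(RUSAGE_SELF, &after);
        printf("soft (minor) faults taken by child's writes: %ld\n",
               after.ru_minflt - before.ru_minflt);
        _exit(0);
    }
    wait(NULL);
    return 0;
}
```

The writes succeed transparently; the only visible trace is the minor-fault counter going up by roughly one per page.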
Page faults come in three categories:

- soft (aka minor): the data is already in physical memory, and the OS just has to wire up a page-table entry for it (e.g. a copy-on-write page after a fork, or a page that's already in the page cache).
- hard (aka major): the data has to be read from disk before the page can be wired up (it was paged out, or it's a file mapping that hasn't been read yet).
- invalid: the process has no mapping at that virtual address (or not with that permission), so the access is a bug: segfault.
The hardware doesn't know which is which; all it knows is that a page walk didn't find a valid page-table entry for that virtual address, so it's time to let the OS decide what to do next (i.e. raise a page-fault exception which runs the OS's page-fault handler). Valid/invalid are purely software/OS concepts.
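To make that division of labour concrete, here is a schematic of the decision the OS's handler makes. This is not any real kernel's code; every type and function name in it is invented for illustration:

```c
#include <signal.h>     /* SIGSEGV */
#include <stdint.h>

typedef uintptr_t vaddr_t;
struct vma;              /* the OS's record of one mapped region */
struct process;
extern struct process *current;

/* All of these stand in for kernel internals (invented names): */
extern struct vma *find_mapping(struct process *, vaddr_t);
extern int  access_allowed(struct vma *, int is_write);
extern int  page_resident(struct vma *, vaddr_t);
extern void wire_up_pte(struct vma *, vaddr_t, int is_write);
extern void deliver_signal(struct process *, int sig, vaddr_t);
extern void start_disk_read(struct vma *, vaddr_t);
extern void block_until_io_done(struct process *);

void page_fault_handler(vaddr_t addr, int is_write) {
    struct vma *vma = find_mapping(current, addr);

    if (!vma || !access_allowed(vma, is_write)) {
        /* Invalid: the OS has no mapping here -> SIGSEGV
           (runs a handler if installed, else kills the process). */
        deliver_signal(current, SIGSEGV, addr);
    } else if (page_resident(vma, addr)) {
        /* Soft/minor: data is already in RAM (COW, or a mapping the
           OS dropped on purpose); just fix up the page-table entry. */
        wire_up_pte(vma, addr, is_write);
    } else {
        /* Hard/major: page is out on disk; start I/O and let some
           other task run until the DMA completes. */
        start_disk_read(vma, addr);
        block_until_io_done(current);
        wire_up_pte(vma, addr, is_write);
    }
    /* Returning from the exception re-runs the faulting load/store. */
}
```

Note how little the hardware contributed: the faulting address, the access type, and a return path that retries the instruction. Everything else is OS policy.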
These example reasons aren't an exhaustive list: e.g. an OS might remove the hardware mapping for a page without actually paging it out, just to see if the process touches it again soon. (In that case it's just a cheap soft page fault; but if not, the OS might actually page it out to disk, or drop it if it's clean.)
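User-space code can play the same trick with mprotect(), which also demonstrates the signal-delivery path discussed below. A sketch assuming Linux/POSIX (4096-byte pages assumed, error handling omitted; calling mprotect() from a signal handler isn't strictly async-signal-safe, though the pattern is widely used in practice, e.g. by garbage collectors):

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>

#define PAGE 4096                          /* page size assumed, for brevity */

static char *page;                         /* the page being watched */
static volatile sig_atomic_t touched = 0;

static void on_segv(int sig, siginfo_t *si, void *ctx) {
    (void)sig; (void)ctx;
    if ((char *)si->si_addr >= page && (char *)si->si_addr < page + PAGE) {
        touched = 1;                               /* the process reused it */
        mprotect(page, PAGE, PROT_READ | PROT_WRITE);  /* "wire it back up" */
    }
    /* Returning from the handler retries the faulting load/store. */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    page = mmap(NULL, PAGE, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    page[0] = 1;                       /* in use */

    mprotect(page, PAGE, PROT_NONE);   /* revoke access to watch for reuse */
    page[0] = 2;                       /* faults -> handler -> retried, succeeds */

    printf("touched again after revoke: %s\n", touched ? "yes" : "no");
    return 0;
}
```

Linux also exposes this idea directly via userfaultfd(2), which lets a user-space process service page faults for a region itself.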
For HW to be able to fully handle a page fault, we'd need data structures with a hardware-specified layout that somehow tell the hardware what to do in every possible situation. Unless you build a whole kernel into the CPU microcode, it's not possible for it to handle every page fault, especially not invalid ones, which require reading the OS's process / task-management data structures and delivering a signal to user-space: either to a signal handler if there is one, or killing the process.
And especially not hard page faults, where a multi-tasking OS will let some other process run while waiting for the disk to DMA the page(s) into memory, before wiring up the page tables for this process and letting it retry the faulting load or store instruction.
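You can watch the soft/hard distinction from user-space. A sketch assuming Linux (the file path is just an example; error handling omitted). The first touch of a freshly mmap'ed file faults; whether it counts as minor or major depends on whether the data was already in the page cache or had to come off disk:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/os-release", O_RDONLY);   /* example file */
    struct stat st;
    fstat(fd, &st);

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

    struct rusage before, after;
    getrusage(RUSAGE_SELF, &before);
    volatile char c = p[0];            /* first touch: page fault happens here */
    (void)c;
    getrusage(RUSAGE_SELF, &after);

    printf("minor faults: %ld, major faults: %ld\n",
           after.ru_minflt - before.ru_minflt,
           after.ru_majflt - before.ru_majflt);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```

With a cold page cache the touch shows up in ru_majflt (a hard fault, with the disk wait described above); run it again and it's only a minor fault, because the data is still cached: exactly the soft case.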