The book Algorithms Illuminated, Part 4 uses the following definition of reduction when proving problems NP-hard:
A problem A reduces to another problem B if an algorithm that solves B can be easily translated into one that solves A. When discussing NP-hard problems, “easily translate” means that problem A can be solved using at most a polynomial (in the input size) number of invocations of a subroutine that solves problem B, along with a polynomial amount of additional work (outside of the subroutine calls).
This definition diverges from the one used elsewhere, such as in CLRS or on Wikipedia: the author uses (polynomial-time) Turing reductions instead of Karp (many-one) reductions.
My question is: why? Does using Turing reductions keep the set of NP-hard problems exactly the same? Intuitively, I don't see why it wouldn't: Turing reductions match what a valid reduction should be (a polynomial-time algorithm for B yields a polynomial-time algorithm for A whenever A <= B).
NP-completeness is defined in terms of many-one reductions, but for NP-hardness, many-one reductions don't quite work: you can only many-one reduce a decision problem to another decision problem, whereas we want to be able to call problems that aren't decision problems NP-hard, such as the optimization version of TSP, or the search problem of actually producing a satisfying assignment (see the sketch below).
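For concreteness, here is a minimal Python sketch of a Turing reduction in the sense of the quoted definition; the function names and the brute-force stand-in oracle are mine, not the book's. The search problem "find a satisfying assignment" is solved with one call to a SAT decision subroutine per variable, plus one initial call and polynomial bookkeeping.

```python
from itertools import product


def brute_force_sat(clauses):
    """Stand-in decision subroutine for SAT (exponential time; tiny demos only).
    A clause is a list of nonzero ints: k means x_k, -k means NOT x_k;
    the formula is the AND of its clauses."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False


def find_satisfying_assignment(clauses, sat_decider=brute_force_sat):
    """Turing reduction from the search problem (produce an assignment)
    to the decision problem SAT: 1 + n subroutine calls for n variables,
    plus a polynomial amount of extra work."""
    if not sat_decider(clauses):
        return None  # unsatisfiable, nothing to find
    assignment = {}
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for v in variables:
        # Tentatively fix x_v = True by adding the unit clause [v].
        if sat_decider(clauses + [[v]]):
            assignment[v] = True
            clauses = clauses + [[v]]
        else:
            # The formula is satisfiable but not with x_v = True,
            # so it must be satisfiable with x_v = False.
            assignment[v] = False
            clauses = clauses + [[-v]]
    return assignment


if __name__ == "__main__":
    # (x1 OR NOT x2) AND (x2 OR x3)
    print(find_satisfying_assignment([[1, -2], [2, 3]]))
```

Swapping `brute_force_sat` for a genuine polynomial-time SAT decider (if one existed) would make the whole search procedure polynomial, which is exactly the property the book's definition is after, yet there is no way to phrase this as a single many-one reduction between decision problems.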
This does mean losing distinctions like NP vs. co-NP. NP-complete and co-NP-complete are (as far as we know) different notions, but if we use Turing reductions to define NP-hardness, there is no distinction between NP-hard and "co-NP-hard": a Turing reduction may post-process the subroutine's answer, so every decision problem is interreducible with its complement.
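In code, that reduction is just a negation; a one-line sketch with made-up names:

```python
def decide_complement(instance, decide_B):
    """One-call Turing reduction from the complement of B to B:
    run the subroutine for B and flip its answer. A many-one (Karp)
    reduction is not allowed to do this flip, which is why NP-hardness
    and co-NP-hardness remain (as far as we know) distinct under
    Karp reductions but collapse under Turing reductions."""
    return not decide_B(instance)
```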