I have some doubts about how two's complement is handled in the CPU.
Does the CPU always work with two's complement? If not, how does it know when to apply it for subtraction?
From my point of view, I thought that the CPU just operates on bits and that neither one's complement nor two's complement is used, but if so, how can it do subtraction?
If the CPU uses both two's complement and plain binary, how does it know when to use two's complement? As far as I know, two's complement uses one bit for the sign, so if I want to do a subtraction in 8 bits, for example:
255 + (-255)
You cannot perform this operation, because you cannot represent -255 with 8 bits: the MSB is used for the sign, and the most negative value you can represent is -128, so you would need an extra bit.
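Just to show the ranges I'm talking about, here is a small C check (using the fixed-width types from stdint.h; this is only to illustrate my confusion):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 8-bit unsigned range: 0 .. 255 */
    printf("uint8_t: 0 .. %u\n", (unsigned)UINT8_MAX);
    /* 8-bit two's complement range: -128 .. 127, so -255 does not fit */
    printf("int8_t:  %d .. %d\n", (int)INT8_MIN, (int)INT8_MAX);
    return 0;
}
```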
I think I'm mixing things up.
Thanks in advance!
Much of computing is done with unsigned numbers. Unsigned numbers are in some sense probably the most primitive form of numeric representation. In the unsigned form, each bit is a factor (1 or 0) applied to the power of 2 for its bit position. The unsigned form cannot represent negative numbers.
Addressing and addressing arithmetic are extremely common operations for both code and data, and all addressing is unsigned; there is no notion of negative memory addresses.
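As a rough sketch of that positional weighting (my own illustration, not something the processor does as a loop, of course):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 0xB5;                    /* bit pattern 1011 0101 */
    unsigned value = 0;
    for (unsigned i = 0; i < 8; i++) {
        unsigned bit = (x >> i) & 1u;    /* the factor (1 or 0) for position i */
        value += bit << i;               /* add bit * 2^i */
    }
    printf("%u\n", value);               /* prints 181 */
    return 0;
}
```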
Does the CPU always work with two's complement? If not, how does it know when to apply it for subtraction?
It doesn't have to know, and that is the beauty of the 2's complement form! Addition and subtraction of unsigned numbers work exactly the same as for 2's complement signed numbers, so hardware designs generally don't offer separate add/subtract instructions for signed vs. unsigned operands, for example.
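Here's a small C sketch of that point (my own example, not part of the original): the same 8-bit addition produces a result that is correct whether you read the operands and the result as unsigned values or as 2's complement signed values.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t a = 0xFB;                  /* 251 as unsigned, -5 as 2's complement */
    uint8_t b = 0x07;                  /*   7 as unsigned,  7 as 2's complement */
    uint8_t sum = (uint8_t)(a + b);    /* one plain 8-bit binary add: 0x02 */

    /* same result bits, two valid readings */
    printf("unsigned view: %u + %u = %u\n",
           (unsigned)a, (unsigned)b, (unsigned)sum);           /* 251 + 7 = 2 (mod 256) */
    printf("signed view:   %d + %d = %d\n",
           (int)(int8_t)a, (int)(int8_t)b, (int)(int8_t)sum);  /* -5 + 7 = 2 */
    return 0;
}
```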
If the processor supported 1's complement numbers, there would have to be separate instructions for working with them vs. working with unsigned numbers.
In fact, this is how it works for floating point vs. integer. Separate instructions tell the processor to view some data as IEEE floating point format (which happens to use sign-magnitude instead of 2's complement).
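As a quick illustration of that (my example, assuming the usual IEEE-754 single-precision float), the same 32-bit pattern means very different things depending on which instructions and types look at it:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint32_t bits = 0x40490FDBu;      /* one particular 32-bit pattern */
    float f;
    memcpy(&f, &bits, sizeof f);      /* reinterpret the same bits as a float */

    printf("as an unsigned integer: %lu\n", (unsigned long)bits);  /* 1078530011 */
    printf("as an IEEE-754 float:   %f\n", f);                     /* about 3.141593 */
    return 0;
}
```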
Deciding whether to treat an integer as signed or unsigned is a feature of the program: the programmer expresses it in the C (or other language) source, and the compiler then carries it into the machine code of the program.
So, the answer to "how does the processor know?" is that it gets told by the machine code program, and the machine code program gets told by the C (or other) program.
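One way to see that from C (a sketch of my own): the same bit pattern divided by 2 gives different answers depending on the declared signedness, so the compiler has to select different machine instructions (typically a logical shift or unsigned divide vs. an arithmetic shift or signed divide) for each line.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t u = 0xFFFFFFF0u;            /* 4294967280 when read as unsigned */
    int32_t  s = (int32_t)0xFFFFFFF0u;   /* same bit pattern, -16 when read as signed
                                            (the conversion is implementation-defined in C,
                                            but behaves this way on 2's complement machines) */

    printf("unsigned / 2: %lu\n", (unsigned long)(u / 2));   /* 2147483640 */
    printf("signed   / 2: %ld\n", (long)(s / 2));            /* -8 */
    return 0;
}
```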
I thought that the CPU operates on bits in their natural state, but if so, how can it do complex mathematical operations?
The bits are just storage. The processor doesn't know or care if some bits are/were 2's complement or unsigned or other.
Circuitry interprets bits at every machine code instruction, and it is not difficult to build many kinds of circuits for many kinds of arithmetic & logic.
The program chooses what circuits to use by selecting appropriate machine code instruction sequences, and the processor happily executes the provided machine code sequences.
Let's note that in C we declare (integer) variables with a size and sign. So, we might have one variable that is byte-sized and unsigned, and another that is 32 bits wide and signed. In a programming language like C, once we declare a variable, its data type doesn't change, and that variable has that size and sign for the duration of the program.
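In C terms, that might look like this (hypothetical names, just to spell the example out):

```c
#include <stdint.h>

uint8_t flags;     /* byte-sized and unsigned for the whole program */
int32_t balance;   /* 32 bits wide and signed for the whole program */
```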
The processor, however, doesn't read data declarations; it only reads machine code instructions. So, every time that unsigned byte-sized variable is used by the program, the machine code for it will tell the processor that data type information; similarly for the signed 32-bit integer. One of the compiler's jobs is to ensure consistent treatment of variables at every read or write of them.
In machine code there is no enforcement of consistency, and this is one of the reasons that assembly language is more error-prone: there's no compiler ensuring consistency. The processor checks for certain errors (like a null pointer dereference), but it doesn't care if you have bugs, like treating one piece of storage as an unsigned byte one minute and as a signed 32-bit integer the next. The processor doesn't care if your program makes sense; it just runs each machine code instruction according to its instruction set specifications.
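To model in C the kind of inconsistency that raw machine code would happily let you write (a hedged sketch; the -2 result assumes a little-endian machine):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint8_t storage[4] = { 0xFE, 0xFF, 0xFF, 0xFF };

    /* treat the first byte as an unsigned 8-bit value ... */
    printf("as an unsigned byte:      %u\n", (unsigned)storage[0]);   /* 254 */

    /* ... and then the same storage as a signed 32-bit value */
    int32_t wide;
    memcpy(&wide, storage, sizeof wide);
    printf("as a signed 32-bit value: %ld\n", (long)wide);            /* -2 on little-endian */
    return 0;
}
```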