I am currently studying operating systems, and I am having difficulty understanding some parts about Implementing Multi-Threaded Processes.
To be specific, the textbook explains that we can implement user-level threads with a scheme called Per-Processor Kernel Threads. The detailed explanation of this scheme is below:
When the application starts up, the user-level thread library creates one kernel thread for each processor on the host machine. As long as there is no other activity on the system, the kernel will assign each of these threads a processor. Each kernel thread executes the user-level scheduler in parallel: pull the next thread off the user-level ready list, and run it. Because thread scheduling decisions occur at user level, they can be flexible and application specific.
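The scheduler loop the quoted passage describes ("pull the next thread off the user-level ready list, and run it") can be sketched in miniature. This is a hypothetical illustration, not any real library: user threads are modeled as Python generators, and `yield` stands in for a voluntary context switch. A real implementation would save and restore register state instead, and each per-processor kernel thread would run this same loop over a shared, locked ready list.

```python
from collections import deque

def scheduler_loop(ready_list):
    """Toy user-level scheduler: run each ready thread until it
    yields (a simulated context switch) or finishes."""
    trace = []
    while ready_list:
        thread = ready_list.popleft()    # pull next user thread off the ready list
        try:
            step = next(thread)          # run it until its next yield
            trace.append(step)
            ready_list.append(thread)    # still runnable: back on the list
        except StopIteration:
            pass                         # thread finished
    return trace

def user_thread(name, steps):
    """A 'user thread': does some work, yielding between steps."""
    for i in range(steps):
        yield f"{name}:{i}"

ready = deque([user_thread("A", 2), user_thread("B", 1)])
print(scheduler_loop(ready))  # → ['A:0', 'B:0', 'A:1']
```

Because the loop itself is ordinary user-mode code, the policy (here plain FIFO) could be swapped for anything application-specific, which is the flexibility the textbook is claiming.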
However, it then mentions that this scheme has some downsides, similar to those of green threads. The downsides mentioned are below:
Any time a user-level thread calls into the kernel, its host kernel thread blocks. This prevents the thread library from running a different user-level thread on that processor in the meantime.
Any time the kernel time-slices a kernel thread, the user-level thread it was running is also suspended. The library cannot resume that thread until the kernel thread resumes.
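To make the first downside concrete, here is a toy sketch (hypothetical, with `time.sleep` standing in for a blocking system call such as a `read` on an empty pipe). While one user thread is inside the blocking call, the kernel thread running the scheduler loop is stuck there, so the other user thread on that processor cannot run in the meantime:

```python
import time
from collections import deque

def blocking_thread():
    time.sleep(0.05)   # stands in for a blocking system call
    yield "io-done"

def quick_thread():
    yield "quick"

def run_all(ready):
    order = []
    while ready:
        t = ready.popleft()
        try:
            order.append(next(t))  # next() does not return until the
                                   # blocking call inside t completes
            ready.append(t)
        except StopIteration:
            pass
    return order

start = time.monotonic()
order = run_all(deque([blocking_thread(), quick_thread()]))
elapsed = time.monotonic() - start
print(order)                # → ['io-done', 'quick']
print(elapsed >= 0.05)      # quick_thread had to wait out the "syscall"
```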
I cannot fully understand either of them. Here are my questions:

1. If a user-level thread calls into the kernel, why does its host kernel thread block in this system?
2. What does "kernel time-slices a kernel thread" mean?

Thanks.
What you are describing is sometimes referred to as the many-to-many model in textbooks, with the added constraint that the number of kernel threads is limited to the number of processors.
I do not know of ANY operating system that implements threads this way. (If someone out there knows of some non-academic operating system that does threading this way, please enlighten me.) Such a system would be ridiculously convoluted to implement.
There is no real advantage whatsoever to user threads. Sadly, most operating systems textbooks are best used as cat box liners. Many insist on describing theoretical (but impracticable) advantages of user threads that simply do not exist in the real world. What is being described here is running user threads on top of kernel threads.
This statement is just laughable:
Because thread scheduling decisions occur at user level, they can be flexible and application specific.
a) You are still going to have the [implicitly] inflexible kernel thread underneath.
b) The operating system's implementation of kernel threads would have to be completely inept if one could get better performance with user threads in this manner.
This is total BS:
Any time a user-level thread calls into the kernel, its host kernel thread blocks. This prevents the thread library from running a different user-level thread on that processor in the meantime.
a) There are non-blocking kernel calls.
b) Calling I/O system services in SOME retrograde operating systems blocks user threads. This is not true in all operating systems.
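Point (a) can be demonstrated directly: most kernels offer non-blocking variants of I/O calls. In this sketch, a pipe's read end is put into non-blocking mode, so a read with no data available returns immediately (as a `BlockingIOError`) instead of suspending the calling kernel thread, which is exactly what a user-level scheduler would need in order to switch to another user thread and retry later:

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)  # request non-blocking reads on this descriptor

would_block = False
try:
    os.read(r, 1)          # no data yet: would suspend us in blocking mode
except BlockingIOError:
    would_block = True     # instead, an immediate error return

os.write(w, b"x")
data = os.read(r, 1)       # data is available now, so this succeeds
os.close(r)
os.close(w)
print(would_block, data)   # → True b'x'
```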
To your specific questions:
If a user-level thread calls into the kernel, why does its host kernel thread block in this system?
See above. It only happens for some system calls in some poorly designed operating systems.
What does kernel time-slices a kernel thread mean?
Yes, that's poor English. In thread scheduling, a thread may be given a fixed amount of time (a "time slice" or "quantum") that it can execute before the scheduler kicks in to see if another thread should be given a turn to execute. If you have a thread that does long calculations without doing I/O, this time limit prevents that thread from hogging the system.
Why this is a specific drawback of this thread model is beyond me. The same thing happens in pure kernel threads. Or pure user threads.
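The time-slice idea can be simulated with a toy round-robin loop (all names hypothetical; "work" is abstract units rather than real CPU time). Each thread is run for at most `quantum` units, then preempted and put back on the ready queue, so a long CPU-bound thread cannot hog the processor:

```python
from collections import deque

def round_robin(threads, quantum):
    """Simulate round-robin scheduling: threads is a list of
    (name, remaining_work) pairs; returns the schedule of turns."""
    ready = deque(threads)
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)              # run for at most one quantum
        schedule.append((name, ran))               # record this turn on the CPU
        if remaining - ran > 0:
            ready.append((name, remaining - ran))  # preempted: requeue the rest
    return schedule

# "long" needs 5 units, "short" needs 2; a quantum of 2 interleaves them,
# so "short" finishes without waiting for all of "long".
print(round_robin([("long", 5), ("short", 2)], quantum=2))
# → [('long', 2), ('short', 2), ('long', 2), ('long', 1)]
```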
Sadly, what you have here is a book that is taking simple concepts and making them overly complicated. You have my sympathies.