USER LEVEL THREADS:
- Advantages:
· User-level threads can be implemented on an operating system that does not support threads.
· Implementing user-level threads does not require modification of the operating system; everything is managed by the thread library.
· Simple representation: a thread is represented by a thread ID, program counter, registers, and stack, all stored in the user process's address space.
· Simple management: creating new threads, switching between threads, and synchronizing threads can all be done without kernel intervention.
· Fast and efficient: switching threads is much less expensive than a system call.
- Disadvantages:
· There is a lack of coordination between threads and the operating system kernel. A process gets one time slice whether it has 1 thread or 10,000 threads within it; it is up to each thread to give up control to the other threads.
· If one thread makes a blocking system call, the entire process can be blocked in the kernel, even if other threads in the same process are in the ready state.
KERNEL LEVEL THREADS:
- Advantages:
· Because the kernel has full knowledge of all threads, the scheduler may decide to allocate more time to a process with a large number of threads than to a process with a small number of threads, which makes kernel threads useful for thread-intensive applications.
- Disadvantages:
· Kernel-level threads are slow and inefficient, since the kernel must manage and schedule all threads as well as all processes. A full TCB is required for each thread to maintain information about it, which increases overhead and kernel complexity.
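To make the "everything is managed by the thread library" point concrete, here is a minimal sketch of a cooperative user-level context switch using the POSIX ucontext API (available on Linux, though deprecated in newer POSIX revisions). The names worker and STACK_SIZE, and the simple two-way switch, are illustrative assumptions rather than anything from the answer above; a real user-level thread library would add a scheduler, many contexts, and yield/join primitives.

/* Cooperative user-level "thread" switch: no kernel scheduling decision
 * is involved in swapcontext(), which is why switching is cheap -- and
 * also why a blocking system call inside worker() would stall the whole
 * process, as noted in the disadvantages above. */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, worker_ctx;

static void worker(void) {
    puts("worker: running entirely in user space");
    swapcontext(&worker_ctx, &main_ctx);   /* yield back to main */
    puts("worker: resumed, now finishing");
}

int main(void) {
    char *stack = malloc(STACK_SIZE);      /* stack lives in the user process address space */

    getcontext(&worker_ctx);
    worker_ctx.uc_stack.ss_sp = stack;
    worker_ctx.uc_stack.ss_size = STACK_SIZE;
    worker_ctx.uc_link = &main_ctx;        /* resume main when worker() returns */
    makecontext(&worker_ctx, worker, 0);

    puts("main: switching to worker");
    swapcontext(&main_ctx, &worker_ctx);   /* pure user-space context switch */
    puts("main: back from worker, switching again");
    swapcontext(&main_ctx, &worker_ctx);
    puts("main: done");
    free(stack);
    return 0;
}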
It all depends on the purpose of the staging area. For example an Emergency Response Staging area in an office building may have first aid equipment and walkie-talkies. All resources in the staging area are available and should be ready for assignment.
So that the CPU can utilize all the resources of the OS.
A foreground process has access to the terminal's standard I/O. Background processes typically run with little or no user interaction at all; they interact with the system rather than with the user.
The major function of an operating system is to manage all resources of a system.
Some resources are shared by different threads of the same process while some are not. The threads share the address space, open files, and global variables, but each thread has its own stack and its own copy of the registers (including the program counter).
A process is composed of one or more threads of execution. Multiple threads allow a process to perform two or more operations concurrently. This is particularly useful in machines with two or more processors as the threads can execute simultaneously. All the threads of a process run in a shared memory space; separate processes run in separate memory spaces. A process must have at least one thread, the primary thread. However, threads can spawn new threads as required. Each thread has its own call stack but shares the same data segment and virtual address space as the process.
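A minimal sketch with POSIX threads, assuming gcc with -pthread: both threads update the same global counter, which lives in the shared data segment, while the loop variable lives on each thread's private call stack. The mutex and the iteration counts are illustrative choices, not details from the answer above.

/* Two threads of one process share shared_counter; each has its own i. */
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;                 /* shared data segment */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *run(void *arg) {
    for (int i = 0; i < 100000; i++) {         /* i is per-thread (on its stack) */
        pthread_mutex_lock(&lock);
        shared_counter++;                      /* visible to both threads */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, run, NULL);
    pthread_create(&t2, NULL, run, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* prints 200000 */
    return 0;
}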
When a thread is created, it does not require any new resources to execute; the thread shares the resources, such as memory, of the process to which it belongs. The benefit of this sharing is that it allows an application to have several different threads of activity all within the same address space. In contrast, creating a new process is heavyweight because it always requires a new address space to be created, and even if processes share memory, inter-process communication is expensive compared to communication between threads.
In operating systems, a child process is a new process created by an existing process; it operates independently and has its own memory space. A thread, by contrast, is a subset of a process, sharing the same memory space and resources as the parent process. Threads are lighter weight and more efficient than processes in terms of resource utilization.
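A minimal sketch using fork() on a POSIX system to show the separate memory spaces: the child gets its own copy of the parent's address space, so a change made in the child is not visible to the parent. The variable name value is purely illustrative.

/* Parent and child processes have separate copies of value. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int value = 1;
    pid_t pid = fork();

    if (pid == 0) {                 /* child process */
        value = 42;                 /* modifies only the child's copy */
        printf("child:  value = %d\n", value);
        return 0;
    }
    waitpid(pid, NULL, 0);          /* parent waits for the child */
    printf("parent: value = %d\n", value);   /* still 1: separate memory */
    return 0;
}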
No, a deadlock by definition involves two or more processes waiting for each other to release resources. In the case of a single process, there is no contention for resources with other processes that could lead to a deadlock.
A deadlock usually occurs when there are multiple threads running. Say there are three threads A, B, and C. A is holding resource X and is waiting for resource Y to complete its operation. B is holding resource Y and is waiting for resource Z. C is holding Z and is waiting for X. All three threads are waiting on resources held by other waiting threads, which causes indefinite waiting; this is termed a deadlock.
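A minimal sketch with POSIX threads of the circular wait just described: threads A, B, and C each lock one mutex (X, Y, Z) and then try to lock the next one in the cycle. The sleep() call and the pair struct are illustrative assumptions added to make the deadlock reproducible; when run, the program normally hangs in the joins.

/* Circular wait: A holds X wants Y, B holds Y wants Z, C holds Z wants X. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t X = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t Y = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t Z = PTHREAD_MUTEX_INITIALIZER;

struct pair { pthread_mutex_t *held, *wanted; const char *name; };

static void *run(void *arg) {
    struct pair *p = arg;
    pthread_mutex_lock(p->held);          /* acquire first resource */
    printf("%s: holding first lock\n", p->name);
    sleep(1);                             /* give the other threads time to lock theirs */
    pthread_mutex_lock(p->wanted);        /* circular wait: blocks forever */
    printf("%s: got both locks\n", p->name);   /* normally never printed */
    pthread_mutex_unlock(p->wanted);
    pthread_mutex_unlock(p->held);
    return NULL;
}

int main(void) {
    struct pair a = { &X, &Y, "A" }, b = { &Y, &Z, "B" }, c = { &Z, &X, "C" };
    pthread_t ta, tb, tc;
    pthread_create(&ta, NULL, run, &a);
    pthread_create(&tb, NULL, run, &b);
    pthread_create(&tc, NULL, run, &c);
    pthread_join(ta, NULL);               /* these joins never return: deadlock */
    pthread_join(tb, NULL);
    pthread_join(tc, NULL);
    return 0;
}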
A major benefit of multi-threading in computer operating systems is that the processor and other system resources are utilized to the maximum. With single-threading, system resources may remain idle for periods of time.
That is called a common-pool resource, where resources are collectively owned and accessible to all members of a community. Examples include air and water.
A thread is the sequence of instructions followed by a CPU and is an independently dispatchable unit in the run queue. A process can start and manage multiple threads, each managing an aspect of the overall processing. The operating system can schedule the threads independently, giving them CPU time when they are ready or blocking them when they are waiting on something, such as an I/O completion. In a network process, such as a web server, there can be many things going on concurrently. Threads are an ideal solution to the problem of managing all of these things, because the main process does not need to poll each sub-process (thread) to see if it needs or is ready to do work.
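A minimal sketch, assuming POSIX threads: the main loop hands each incoming request to its own thread, so blocking in one handler never stalls the others and the main thread never has to poll its workers. handle_request() and the fixed four request IDs are illustrative stand-ins for a real accept() loop and real network I/O.

/* Thread-per-request sketch: the OS blocks only the waiting thread. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *handle_request(void *arg) {
    long id = (long)arg;
    printf("request %ld: reading (blocks only this thread)\n", id);
    sleep(1);                                  /* stand-in for blocking I/O */
    printf("request %ld: done\n", id);
    return NULL;
}

int main(void) {
    pthread_t workers[4];
    for (long id = 0; id < 4; id++)            /* "accept" four requests */
        pthread_create(&workers[id], NULL, handle_request, (void *)id);
    for (int i = 0; i < 4; i++)
        pthread_join(workers[i], NULL);        /* main thread stays simple: no polling */
    return 0;
}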