CPU Scheduling Assignment Help
A multiprogramming operating system allows more than one process to be executed. The CPU scheduler is the part of the operating system that decides which process or thread runs next and when. The operating system is itself implemented as one or more processes, so there must be some way for the operating system and application processes to share the CPU; this sharing is exactly what multiprogramming requires. If you would like to know more about it, it is wise to get CPU Scheduling Assignment Help from the experts who provide Computer Science Assignment Help and have the expertise to deliver the best guidance.
Goals of Scheduling
Efficiency: Keep the CPU as busy as possible, ideally at 100% utilization.
Throughput: Maximize the number of jobs processed per hour.
Waiting time: Minimize the total time a process spends waiting in the ready queue.
Response time: Minimize the time from submission of a request until the first response is produced.
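As a sketch of how the waiting-time and turnaround-time goals above are measured, the following assumes a simple non-preemptive run where each job is described by three illustrative values (arrival, start, finish); the function name and job tuples are examples, not part of any standard API:

```python
def metrics(jobs):
    """jobs: list of (arrival, start, finish) times for completed jobs.
    Returns (average waiting time, average turnaround time)."""
    n = len(jobs)
    # Waiting time: time between arriving and first getting the CPU.
    waiting = sum(start - arrival for arrival, start, _ in jobs) / n
    # Turnaround time: time between arriving and finishing completely.
    turnaround = sum(finish - arrival for arrival, _, finish in jobs) / n
    return waiting, turnaround

# Three jobs arriving at time 0 and running back to back for 3, 5, 2 units.
print(metrics([(0, 0, 3), (0, 3, 8), (0, 8, 10)]))
```

With these numbers the average waiting time is (0 + 3 + 8) / 3 and the average turnaround time is (3 + 8 + 10) / 3 = 7.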
Preemptive Vs. Non-preemptive Scheduling
Non-Preemptive: Non-preemptive algorithms are designed so that once a process enters the running state, it is not removed from the processor until it has completed its service time.
Preemptive: The process with the highest priority should always be the one currently using the processor. If a process is using the processor when a new process with a higher priority enters the ready list, the running process should be removed from the processor and returned to the ready list until it is once again the highest-priority process in the system.
According to the experts who provide CPU Scheduling Assignment Help, CPU scheduling is the procedure that lets one process use the CPU while another process is on hold (in the waiting state) because a resource such as I/O is unavailable, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.
First In First Out (FIFO)
This is a non-preemptive scheduling algorithm. The FIFO strategy assigns priority to processes in the order in which they request the processor: the process that requests the CPU first is allocated the CPU first. When a process arrives, it enters at the tail of the ready queue; when the running process terminates, the process at the head of the ready queue is dequeued and run.
However, FIFO ignores the requested service time and all other criteria that may influence performance with respect to turnaround or waiting time.
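The FIFO behavior described above can be sketched as a small simulation; the job names and the assumption that all jobs are ready at time 0 are illustrative:

```python
from collections import deque

def fifo(jobs):
    """jobs: list of (name, burst) in arrival order, all ready at time 0.
    Returns the finish time of each job under first-in-first-out service."""
    queue = deque(jobs)                 # ready queue; head runs next
    clock, finishes = 0, {}
    while queue:
        name, burst = queue.popleft()   # dequeue the head of the ready queue
        clock += burst                  # run it to completion (non-preemptive)
        finishes[name] = clock
    return finishes

print(fifo([("A", 3), ("B", 5), ("C", 2)]))  # {'A': 3, 'B': 8, 'C': 10}
```

Note how the short job C must wait behind the longer jobs A and B, which is exactly the weakness mentioned above.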
Round Robin
This algorithm divides processing time equally among all processes: run a process for one time slice, then move it to the back of the queue, so that each process gets an equal share of the CPU.
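A minimal sketch of the time-slice behavior, assuming all jobs are ready at time 0 and ignoring context-switch overhead (the job names and quantum are illustrative):

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, burst), all ready at time 0.
    Returns the completion time of each job under a fixed time slice."""
    queue = deque(jobs)
    clock, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for at most one time slice
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the queue
        else:
            done[name] = clock          # job finished within this slice
    return done

print(round_robin([("A", 3), ("B", 5)], quantum=2))  # {'A': 5, 'B': 8}
```

Here A and B alternate in two-unit slices, so neither monopolizes the CPU.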
Priority Based Scheduling
Run the highest-priority processes first, and use round-robin among processes of equal priority. Re-insert a preempted process into the run queue behind all processes of greater or equal priority.
The problem with this approach is that it may cause low-priority processes to starve.
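One way to sketch the rule above is a heap keyed on (priority, arrival counter): assuming lower numbers mean higher priority, a fresh counter value on re-insertion puts a preempted job behind all jobs of greater or equal priority, giving round-robin among equals. The job data and quantum are illustrative:

```python
import heapq
import itertools

def priority_rr(jobs, quantum):
    """jobs: list of (priority, name, burst); lower number = higher priority.
    Returns completion times; equal-priority jobs share the CPU round-robin."""
    order = itertools.count()           # tie-breaker: insertion order
    heap = [(p, next(order), n, b) for p, n, b in jobs]
    heapq.heapify(heap)
    clock, done = 0, {}
    while heap:
        prio, _, name, remaining = heapq.heappop(heap)
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            # Re-insert behind all jobs of greater or equal priority:
            # the fresh counter value sorts it last among equals.
            heapq.heappush(heap, (prio, next(order), name, remaining - run))
        else:
            done[name] = clock
    return done

print(priority_rr([(1, "A", 4), (2, "B", 2), (1, "C", 3)], quantum=2))
```

In this example B (priority 2) runs only after both priority-1 jobs finish; with a steady stream of priority-1 arrivals it would starve, as noted above.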
Shortest Job First
Maintain the ready queue in order of increasing job length. When a job arrives, insert it into the ready queue based on its length; when the current process finishes, pick the job at the head of the queue and run it.
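The ordered ready queue described above can be sketched with a min-heap keyed on job length; this assumes the non-preemptive variant with all jobs ready at time 0, and the job names are illustrative:

```python
import heapq

def sjf(jobs):
    """jobs: list of (name, burst), all ready at time 0.
    Runs the shortest job first, non-preemptively; returns finish times."""
    heap = [(burst, name) for name, burst in jobs]
    heapq.heapify(heap)                 # ready queue ordered by job length
    clock, finishes = 0, {}
    while heap:
        burst, name = heapq.heappop(heap)  # head = shortest remaining job
        clock += burst                     # run it to completion
        finishes[name] = clock
    return finishes

print(sjf([("A", 5), ("B", 2), ("C", 3)]))  # {'B': 2, 'C': 5, 'A': 10}
```

Compared with the FIFO order A, B, C, the short jobs finish earlier here, which is why SJF minimizes average waiting time for a fixed set of jobs.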