Monday, 14 February 2011

Round Robin & Shortest Job First

Round-robin
Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system. It assigns time slices to each process in equal portions and in circular order, handling all processes without priority (this is also known as cyclic executive). Round-robin scheduling is simple, easy to implement, and starvation-free. It can also be applied to other scheduling problems, such as data packet scheduling in computer networks.
The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn.
§  Each process gets a small unit of CPU time (time quantum),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready queue.
§  If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units at once. No process waits more
than (n-1)q time units.
§  Performance
®    q large ⇒ behaves like FIFO
®    q small ⇒ q must be large with respect to the context-switch time, otherwise overhead is too high
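The trade-off in the last bullet can be made concrete with a small sketch. The numbers below (a 0.1 ms context switch, quanta of 10 ms and 0.2 ms) are illustrative assumptions, not measurements:

```python
def switch_overhead(quantum_ms, switch_ms):
    """Fraction of CPU time lost to context switching when every
    quantum is followed by one context switch."""
    return switch_ms / (quantum_ms + switch_ms)

# Large quantum: the switch cost is a small fraction of each cycle.
print(switch_overhead(10.0, 0.1))
# Tiny quantum: a third of the CPU's time goes to switching.
print(switch_overhead(0.2, 0.1))
```

With a generous quantum the overhead is about 1%, while a quantum comparable to the switch time wastes a large share of the CPU, which is why q must be large relative to the context-switch cost.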


Process scheduling
Round-robin job scheduling may not be desirable if the sizes of the jobs or tasks are highly variable: a process that produces large jobs would be favoured over other processes. This problem may be solved by time-sharing, i.e. by giving each job a time slot or quantum (its allowance of CPU time) and interrupting the job if it is not completed by then. The job is resumed the next time a time slot is assigned to that process.
Example: The time slot could be 100 milliseconds. If job1 takes a total time of 250ms to complete, the round-robin scheduler will suspend the job after 100ms and give other jobs their time on the CPU. Once the other jobs have had their equal share (100ms each), job1 will get another allocation of CPU time and the cycle will repeat. This process continues until the job finishes and needs no more time on the CPU.
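The 250 ms example above can be simulated directly. The sketch below assumes all jobs are ready at time 0 and ignores context-switch cost; the two extra 100 ms jobs are illustrative assumptions:

```python
from collections import deque

def round_robin(burst_ms, quantum=100):
    """Simulate round-robin scheduling; return {job: completion time in ms}.

    burst_ms maps each job name to the total CPU time it needs.
    """
    remaining = dict(burst_ms)
    queue = deque(burst_ms)          # ready queue, circular order
    clock = 0
    finish = {}
    while queue:
        job = queue.popleft()
        run = min(quantum, remaining[job])   # at most one quantum
        clock += run
        remaining[job] -= run
        if remaining[job] == 0:
            finish[job] = clock              # job done, leaves the system
        else:
            queue.append(job)                # preempted: back of the queue
    return finish

print(round_robin({"job1": 250, "job2": 100, "job3": 100}))
# job1 runs 100 ms, waits for job2 and job3, runs 100 ms more,
# then finishes its remaining 50 ms at t = 450 ms.
```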




Data packet scheduling
§  In best-effort packet switching and other statistical multiplexing, round-robin scheduling can be used as an alternative to first-come first-served queuing.
§  A multiplexer, switch, or router that provides round-robin scheduling has a separate queue for every data flow, where a data flow may be identified by its source and destination address. The algorithm lets every active data flow (one that has data packets in the queue) take turns in transferring packets on a shared channel in a periodically repeated order. The scheduling is work-conserving: if one flow is out of packets, the next data flow takes its place, so the scheduler tries to keep link resources from going unused.
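A work-conserving round-robin packet scheduler as described above can be sketched in a few lines. The flow names and packet labels are illustrative assumptions:

```python
from collections import deque

def rr_packet_schedule(flows):
    """Work-conserving round-robin over per-flow packet queues.

    flows: {flow_id: deque of packets}.  A flow with an empty queue is
    simply skipped, so the link never idles while any queue is backlogged.
    Returns the order in which packets are transmitted.
    """
    order = []
    active = deque(fid for fid, q in flows.items() if q)
    while active:
        fid = active.popleft()
        order.append(flows[fid].popleft())   # one packet per turn
        if flows[fid]:                       # still backlogged: re-queue
            active.append(fid)
    return order

flows = {
    "A": deque(["a1", "a2", "a3"]),
    "B": deque(["b1"]),
    "C": deque(["c1", "c2"]),
}
print(rr_packet_schedule(flows))
# Flows take turns; once B and C run dry, A gets every remaining slot.
```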

§  Round-robin scheduling results in max-min fairness if the data packets are equally sized, since the data flow that has waited the longest time is given scheduling priority.


§  Round-robin scheduling may not be desirable if the sizes of the jobs or tasks are highly variable. A user who produces large jobs would be favored over other users. In that case fair queuing would be preferable.


§  If guaranteed or differentiated quality of service is offered, and not only best-effort communication, deficit round-robin (DRR) scheduling, weighted round-robin (WRR) scheduling, or weighted fair queuing (WFQ) may be considered.
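Of the variants just mentioned, deficit round-robin (DRR) is easy to sketch: each flow accumulates a byte credit per round, so variable-size packets are served fairly in bytes rather than packet counts. The quantum and packet sizes below are illustrative assumptions:

```python
from collections import deque

def drr(flows, quantum):
    """Minimal deficit round-robin sketch.

    flows: {flow_id: deque of packet sizes in bytes}.  On each turn a
    flow's deficit counter grows by `quantum`; the flow may send packets
    as long as the head packet fits within the counter.
    """
    deficit = {fid: 0 for fid in flows}
    order = []
    active = deque(fid for fid, q in flows.items() if q)
    while active:
        fid = active.popleft()
        deficit[fid] += quantum
        q = flows[fid]
        while q and q[0] <= deficit[fid]:
            deficit[fid] -= q[0]
            order.append((fid, q.popleft()))
        if q:
            active.append(fid)      # head packet too big: wait for more credit
        else:
            deficit[fid] = 0        # idle flows carry no leftover credit
    return order

print(drr({"A": deque([700, 300]), "B": deque([500])}, quantum=500))
# A's 700-byte packet must wait two rounds of 500-byte credit.
```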


§  In multiple-access networks, where several terminals are connected to a shared physical medium, round-robin scheduling may be provided by Token passing channel access schemes such as token ring, as well as by polling or resource reservation from a central control station.


§  In a centralized wireless packet radio network, where many stations share one frequency channel, a scheduling algorithm in a central base station may reserve time slots for the mobile stations in a round-robin fashion and provide fairness. However, if link adaptation is used, it takes much longer to transmit a given amount of data to "expensive" users than to others, since channel conditions differ. It would be more efficient to postpone transmission until channel conditions improve, or at least to give scheduling priority to less expensive users. Round-robin scheduling does not exploit this. Higher throughput and system spectral efficiency may be achieved by channel-dependent scheduling, for example a proportionally fair algorithm or maximum throughput scheduling. Note that the latter is characterized by undesirable scheduling starvation.


§  Round-robin scheduling in UNIX follows the same concept and can be implemented using semaphores.


Shortest-Job-First (SJF)
*      Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when the CPU is available, it is assigned to the process that has the smallest next CPU burst.

*      SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is provably optimal.

*      The SJF algorithm favors short jobs (or processes) at the expense of longer ones.

*      The obvious problem with the SJF scheme is that it requires precise knowledge of how long a job or process will run, and this information is not usually available.

*      Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time




*      Two schemes:
®    nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst
®    preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)

*      SJF is optimal – gives minimum average waiting time for a given set of processes


Process        Burst Time
P1             6
P2             8
P3             7
P4             3



§  Using SJF scheduling, the processes P1, P2, P3, and P4 will be scheduled as shown in the Gantt chart:


| P4 | P1 | P3 | P2 |
0    3    9    16   24

§  The waiting times are P4 = 0, P1 = 3, P3 = 9, and P2 = 16. Thus, the average waiting time is (0 + 3 + 9 + 16)/4 = 7 ms.
§  For the same set of processes, the average waiting time under FCFS scheduling would be 10.25 ms.
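Both averages can be checked with a short calculation. With all processes arriving at time 0, a job's waiting time is simply the sum of the bursts that run before it:

```python
def avg_wait(bursts):
    """Average waiting time when jobs run to completion in list order
    (non-preemptive, all jobs arriving at time 0)."""
    total_wait, clock = 0, 0
    for b in bursts:
        total_wait += clock   # this job waited for everything before it
        clock += b
    return total_wait / len(bursts)

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}

fcfs = avg_wait(list(bursts.values()))    # arrival order: P1, P2, P3, P4
sjf = avg_wait(sorted(bursts.values()))   # shortest first: P4, P1, P3, P2
print(fcfs, sjf)   # 10.25 7.0
```

Sorting the bursts in ascending order is exactly the SJF schedule, which is why it minimizes the average waiting time.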
§  The difficulty with the SJF algorithm is determining the length of the next CPU burst, which is generally predicted as an exponential average of the measured lengths of previous CPU bursts.
§  This SJF algorithm can be non pre-emptive or pre-emptive.
§  A pre-emptive priority scheduling algorithm will pre-empt the CPU if the priority of the newly arrived process is higher than that of the currently running process.
§  The limitation of this algorithm is that in a heavily loaded system, a steady stream of high-priority processes can prevent a low-priority process from ever getting the CPU.
§  A solution to this starvation of low-priority requests is to incorporate an additional parameter, aging, which means the priority of a task is increased as it grows older.
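Aging can be sketched as follows. The boost size, the waiting threshold, and the process records are illustrative assumptions, not a real kernel policy:

```python
def age_priorities(ready_queue, boost=1, threshold=5):
    """Aging sketch: any process that has waited longer than `threshold`
    scheduling ticks gets its priority raised, so low-priority processes
    cannot starve forever.  Higher number = higher priority."""
    for proc in ready_queue:
        if proc["waited"] > threshold:
            proc["priority"] += boost
            proc["waited"] = 0      # waiting credit consumed by the boost
    return ready_queue

queue = [
    {"pid": 1, "priority": 10, "waited": 0},   # high priority, just ran
    {"pid": 2, "priority": 1,  "waited": 9},   # starving: gets boosted
]
age_priorities(queue)
print(queue[1]["priority"])   # 2
```

Applied repeatedly on each scheduler tick, even the lowest-priority process eventually overtakes the steady stream of high-priority arrivals.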


Process Management
Process management is the ensemble of activities of planning and monitoring the performance of a process. The term usually refers to the management of business processes and manufacturing processes. Business process management (BPM) and business process reengineering are interrelated, but not identical.
Process management is the application of knowledge, skills, tools, techniques and systems to define, visualize, measure, control, report and improve processes, with the goal of meeting customer requirements profitably. It can be differentiated from program management in that program management is concerned with managing a group of inter-dependent projects. But from another viewpoint, process management includes program management. In project management, process management is the use of a repeatable process to improve the outcome of the project.

Process (Computing)
In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.
A computer program is a passive collection of instructions; a process is the actual execution of those instructions. Several processes may be associated with the same program; for example, opening up several instances of the same program often means more than one process is being executed.
Multitasking is a method to allow multiple processes to share processors (CPUs) and other system resources. Each CPU executes a single task at a time. However, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish. Depending on the operating system implementation, switches could be performed when tasks perform input/output operations, when a task indicates that it can be switched, or on hardware interrupts.
A common form of multitasking is time-sharing. Time-sharing is a method to allow fast response for interactive user applications. In time-sharing systems, context switches are performed rapidly. This makes it seem like multiple processes are being executed simultaneously on the same processor. The execution of multiple processes seemingly simultaneously is called concurrency.
For security and reliability reasons most modern operating systems prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication functionality.
The operating system holds most of this information about active processes in data structures called process control blocks.
Any subset of resources, but typically at least the processor state, may be associated with each of the process's threads in operating systems that support threads or 'daughter' processes.
The operating system keeps its processes separated and allocates the resources they need so that they are less likely to interfere with each other and cause system failures (e.g., deadlock or thrashing). The operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways.

Process Management (Computing)
A multitasking* operating system may just switch between processes to give the appearance of many processes executing concurrently or simultaneously, though in fact only one process can be executing at any one time on a single-core CPU (unless using multi-threading or other similar technology).
It is usual to associate a single process with a main program, and 'daughter' ('child') processes with any spin-off, parallel processes, which behave like asynchronous subroutines. A process is said to own resources, of which an image of its program (in memory) is one such resource. (Note, however, that in multiprocessing systems, many processes may run off of, or share, the same reentrant program at the same location in memory but each process is said to own its own image of the program.)
Processes are often called tasks in embedded operating systems. The sense of 'process' (or task) is 'something that takes up time', as opposed to 'memory', which is 'something that takes up space'. (Historically, the terms 'task' and 'process' were used interchangeably, but the term 'task' seems to be dropping from the computer lexicon.)
The above description applies to both processes managed by an operating system, and processes as defined by process calculi.
If a process requests something for which it must wait, it will be blocked. When the process is in the blocked state, it is eligible for swapping to disk, but this is transparent in a virtual memory system, where blocks of memory values may really be on disk and not in main memory at any time. Note that even unused portions of active processes/tasks (executing programs) are eligible for swapping to disk. Not all parts of an executing program and its data have to be in physical memory for the associated process to be active.

 Process States
The various process states, displayed in a state diagram, with arrows indicating possible transitions between states.
An operating system kernel that allows multi-tasking needs processes to have certain states. Names for these states are not standardised, but they have similar functionality.
    * First, the process is "created": it is loaded from a secondary storage device (hard disk, CD-ROM, ...) into main memory. After that, the process scheduler assigns it the "waiting" state.
    * While the process is "waiting" it waits for the scheduler to do a so-called context switch and load the process into the processor. The process state then becomes "running", and the processor executes the process instructions.
    * If a process needs to wait for a resource (for user input, for a file to open, ...), it is assigned the "blocked" state. The process state is changed back to "waiting" when the process no longer needs to wait.

    * Once the process finishes execution, or is terminated by the operating system, it is no longer needed. The process is either removed immediately or moved to the "terminated" state, where it waits to be removed from main memory.
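The transitions described above form a small state machine, which can be written out as a transition table. The state names follow the text; real kernels use different (and more) names:

```python
# Allowed transitions between the process states described above.
TRANSITIONS = {
    "created":    {"waiting"},
    "waiting":    {"running"},
    "running":    {"waiting", "blocked", "terminated"},
    "blocked":    {"waiting"},
    "terminated": set(),          # final state: no way out
}

def transition(state, new_state):
    """Move a process to `new_state`, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# A typical life cycle: run, block on I/O, run again, then exit.
state = "created"
for nxt in ("waiting", "running", "blocked", "waiting", "running", "terminated"):
    state = transition(state, nxt)
print(state)   # terminated
```

Note that a blocked process cannot jump straight back to "running"; it must re-enter the "waiting" state and be picked by the scheduler again.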
                                                                    
Inter-process communication

When processes communicate with each other it is called "inter-process communication" (IPC). Processes frequently need to communicate; for instance, in a shell pipeline, the output of the first process must be passed to the second, and so on down the chain. Communication is preferably done in a well-structured way that does not use interrupts.
It is even possible for the two processes to be running on different machines. The operating system (OS) may differ from one process to the other, so some mediators (called protocols) are needed.
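The shell-pipeline example above can be reproduced in a short sketch: two separate processes connected by an OS pipe, with the first one's standard output feeding the second one's standard input. The producer/consumer commands are illustrative assumptions:

```python
import subprocess
import sys

# Emulate `producer | consumer`: the producer prints two lines, and the
# consumer (a separate process) sorts whatever arrives on its stdin.
producer = subprocess.Popen(
    [sys.executable, "-c", "print('banana'); print('apple')"],
    stdout=subprocess.PIPE)
consumer = subprocess.Popen(
    [sys.executable, "-c", "import sys; print(''.join(sorted(sys.stdin)))"],
    stdin=producer.stdout,        # the pipe wiring, as a shell would do it
    stdout=subprocess.PIPE, text=True)
producer.stdout.close()           # let the consumer see end-of-file
output, _ = consumer.communicate()
print(output)                     # the two lines, now in sorted order
```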

Program Management
Program management or programme management is the process of managing several related projects, often with the intention of improving an organization's performance. In practice and in its aims it is often closely related to systems engineering and industrial engineering.
There are two different views of how programs differ from projects.
On one view, projects deliver outputs, discrete parcels or "chunks" of change; programs create outcomes. On this view, a project might deliver a new factory, hospital or IT system. By combining these projects with other deliverables and changes, their programs might deliver increased income from a new product, shorter waiting lists at the hospital or reduced operating costs due to improved technology.
The other view  is that a program is nothing more than either a large project or a set (or portfolio) of projects. On this second view, the point of having a program is to exploit economies of scale and to reduce coordination costs and risks. The project manager's job is to ensure that their project succeeds. The program manager, on the other hand, may not care about individual projects, but is concerned with the aggregate result or end-state. For example, in a financial institution a program may include one project that is designed to take advantage of a rising market, and another to protect against the downside of a falling market. These projects are opposites with respect to their success conditions, but they fit together in the same program.
According to the view that programs deliver outcomes but projects deliver outputs, program management is concerned with doing the right projects. The program manager has been described as 'playing chess' and keeping the overview in mind, the pieces to be used or sacrificed being the projects. Project management, by contrast, is about doing projects right. On this view, successful projects deliver on time, to budget and to specification, whereas successful programs deliver long-term improvements to an organization. Improvements are usually identified through benefits. An organization should select the group of programs that most take it towards its strategic aims whilst remaining within its capacity to deliver the changes. On the other hand, the view that programs are simply large projects or a set of projects allows that a program may need to deliver tangible benefits quickly.
Consider the following set of projects:
    * design of the new product - this delivers a design specification,
    * modifications to the production line or factory - delivers a manufacturing capability,
    * marketing - delivers advertisements, brochures and pamphlets,
    * staff training - delivers staff trained to sell and support the new product.

One view has it that these are different projects within a program. But in practice they can just as well be managed as sub-projects within a single project. Which approach to choose? Program and project management are both practical disciplines, and the answer to such a question must be "whatever works." What works depends very much on the nature of the organization in which the project or program is run. Typically a program is broken down into projects that reflect the organization's structure. The design project will be run by the design team, the factory will manage the modifications to the production line, and so on. Organizational structure and organizational culture are key factors in how to structure a program.
The distinction between the terms "outcome" and "output" is far from clear, except in a trivial sense. Each of the projects listed in the example above is designed to deliver some 'thing', known as a 'deliverable' or an 'output', and together they improve the organization. Where one draws the line between the complete single benefit that causes the improvement and its component parts is partly a matter of preference and partly a matter of the culture and structure of the organization. Either way, benefits will normally be enjoyed long after the end of the program and all of its component projects. The point is that to achieve maximum benefits, there must be an integration of parts into a whole. Whether this integration is managed in something that is called a project or a program is of secondary importance to understanding the benefits and managing the process of integration well.
Many programs are concerned with delivering a capability to change. Only when that capability is transferred to the line management and utilized by the host organization will the benefits actually be delivered. On this view, a program team cannot, on their own, deliver benefits. Benefits can only be delivered through the utilization of a new capability.
Programs are normally designed to deliver the organization's strategy, such as an ambition to be the fourth biggest supermarket in a region by 2015 or to reduce wastage by 5% in two years' time.
According to the Project Management Institute (PMI), The Standard for Program Management, 2nd Ed., "A Program is a group of related projects managed in a coordinated manner to obtain benefits and control NOT available from managing them individually. Programs may include elements of related work outside of the scope of the discrete projects in the program... Some projects within a program can deliver useful incremental benefits to the organization before the program itself has completed."
Program management also emphasizes the coordinating and prioritizing of resources across projects, managing links between the projects and the overall costs and risks of the program.
Program management may provide a layer above the management of projects and focuses on selecting the best group of projects, defining them in terms of their objectives and providing an environment where projects can be run successfully. Program managers should not micromanage, but should leave project management to the project managers.
Many organizations only run one program at a time, a program containing all their projects. In Project Management Institute terminology, this is more likely to be a project portfolio than a program. Some larger organizations may have multiple programs each designed to deliver a range of improvements. Some organizations use the concept of Systems Engineering where others use program management.

The Processor
The processor tells your computer what to do and when to do it; it decides which tasks are more important and prioritizes them according to your computer's needs.










There are, and have been, many processors on the market, running at many different speeds. Speed is measured in megahertz (MHz). One MHz is one million cycles per second (or computer instructions), so if you have a processor running at 2000 MHz, your computer is running at 2,000,000,000 cycles per second, which in more basic terms is the number of instructions your computer can carry out. Another important abbreviation is gigahertz (GHz): 1 GHz is the same as 1000 MHz. It sounds a bit confusing, so here is a simple conversion:
1000 MHz (megahertz) = 1 GHz (gigahertz) = 1,000,000,000 cycles per second (or computer instructions).

Now you can see why they abbreviate it; could you imagine going to a PC store and asking for a "one-thousand-million-cycles-per-second PC", please? A bit of a mouthful, isn't it?
So when buying a new computer, always look for the fastest you can afford. The fastest on the market at the time of writing this article is 3.8 GHz (3800 MHz). Remember, though, that it is not always necessary to purchase such a fast processor; balance your needs. Do you really need top of the range? Especially when the difference between, say, a 3.5 GHz (3500 MHz) and a 3.8 GHz (3800 MHz) processor will barely be noticed (if noticed at all) by you, while the price difference is around £100. With the money you save you could get a nice printer and scanner package.
Now that we have covered speeds, there is one more important subject to cover: which processor? There are three competitors at present: the AMD Athlon, the Intel Pentium and the Intel Celeron. They come in many guises, but basically, the more cores they have and the higher the speed, the better and faster they are.
Processors now come as dual-core, triple-core and quad-core. These processors are the equivalent of running two CPUs (dual core), three CPUs (triple core) or four CPUs (quad core).

In the past the Intel Pentium was the best and most expensive of them all, and it remains today one of the most popular on the market. In layman's terms it is, or was, the designer processor, although AMD has some superb, if not better, and equally highly priced and advanced products. It would be hard to say which is best, as they are direct competitors.
Lastly there is the Intel Celeron; this processor is a budget version of the Intel Pentium 4, the processor you find in most budget computers. If the purse is tight, and you need a computer, then this is your port of call. You will find many sub £400 computers fitted with this processor.
