Scheduling
is a key concept in computer multitasking, multiprocessing and real-time operating system design. Scheduling refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than there are available CPUs. This assignment is carried out by software components known as the scheduler and the dispatcher.
The scheduler is concerned mainly with:
* Throughput - the number of processes that complete their execution per unit of time.
* Latency, specifically:
* Turnaround time - the total time between submission of a process and its completion.
* Response time - the amount of time from when a request is submitted until the first response is produced.
* Fairness - equal CPU time to each process (or, more generally, appropriate CPU time according to each process's priority).
In practice, these goals often conflict (e.g. throughput versus latency), thus a scheduler will implement a suitable compromise.
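As a hand-rolled illustration of that trade-off (the job list and times below are invented, not from the text), a few lines of Python can compare throughput and average turnaround for two run orders of the same jobs:

```python
jobs = [(0, 30), (0, 10), (0, 20)]   # (arrival time, burst time) in ms, all at t=0

def metrics(order):
    """Return (throughput per ms, average turnaround time) for a given run order."""
    t, turnarounds = 0, []
    for arrival, burst in order:
        t += burst                       # this job finishes at time t
        turnarounds.append(t - arrival)  # turnaround = completion - submission
    return len(order) / t, sum(turnarounds) / len(order)

thr_a, tat_short_first = metrics(sorted(jobs, key=lambda j: j[1]))  # short jobs first
thr_b, tat_as_given = metrics(jobs)                                 # submission order
print(thr_a == thr_b, tat_short_first, tat_as_given)
```

Throughput is identical in both orders (the same work finishes in the same total time), yet running short jobs first lowers average turnaround; choosing which metric to favour is exactly the compromise a scheduler implements.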
In real-time environments, such as embedded systems for automatic control in industry (for example robotics), the scheduler also must ensure that processes can meet deadlines; this is crucial for keeping the system stable.
Types of operating system schedulers
Operating systems may feature up to three distinct types of scheduler: a long-term scheduler (also known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and a short-term scheduler. The names suggest the relative frequency with which these functions are performed.
Long-term scheduler
The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue; that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system and the degree of concurrency to be supported at any one time - i.e., whether many or few processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks. Without proper real-time scheduling, modern GUI interfaces would seem sluggish.
Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers and render farms. In these cases, special purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.
Mid-term scheduler
The mid-term scheduler temporarily removes processes from main memory and places them in secondary memory (such as a disk drive), or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also, incorrectly, as "paging out" or "paging in"). The mid-term scheduler may decide to swap out a process which has not been active for some time, which has a low priority, which is page faulting frequently, or which is taking up a large amount of memory, in order to free up main memory for other processes; it swaps the process back in later, when more memory is available or when the process has been unblocked and is no longer waiting for a resource.
In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the mid-term scheduler may actually perform the role of the long-term scheduler by treating binaries as "swapped-out processes" upon their execution. In this way, when a segment of the binary is required, it can be swapped in on demand, or "lazy loaded".
Short-term scheduler
The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers - a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU [Stallings, 396]. In most cases the short-term scheduler is written in assembly language because it is a critical part of the operating system.
Dispatcher
Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
Scheduling disciplines
Scheduling disciplines are algorithms used for distributing resources among parties which simultaneously and asynchronously request them. Scheduling disciplines are used in routers (to handle packet traffic), in operating systems (to share CPU time among both threads and processes), in disk drives (I/O scheduling), in printers (print spooling), in most embedded systems, etc.
The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to be allocated resources. There are many different scheduling algorithms. In this section, we introduce several of them.
First in first out
First In, First Out (FIFO), also known as First Come, First Served (FCFS), is the simplest scheduling algorithm: it simply queues processes in the order that they arrive in the ready queue.
Since context switches only occur upon process termination, and no reorganization of the process queue is required, scheduling overhead is minimal.
Throughput can be low, since long processes can hog the CPU
Turnaround time, waiting time and response time can be high for the same reason.
No prioritization occurs, thus this system has trouble meeting process deadlines.
The lack of prioritization does permit every process to eventually complete, hence no starvation.
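The FIFO behaviour above can be sketched in a few lines of Python (an illustration with invented burst times, not an OS implementation):

```python
from collections import deque

def fifo(bursts):
    """Return per-process waiting times for burst lengths given in arrival order."""
    queue = deque(bursts)
    clock, waits = 0, []
    while queue:
        burst = queue.popleft()
        waits.append(clock)   # time spent waiting before its one and only run
        clock += burst        # run to completion: no preemption, no reordering
    return waits

# A long first process makes every later process wait behind it.
print(fifo([100, 1, 1]))  # [0, 100, 101]
```

The two one-unit jobs wait 100+ units behind the long job, which is exactly why throughput and latency suffer under FIFO even though scheduling overhead is minimal.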
Shortest remaining time
Shortest remaining time is a preemptive variant of Shortest Job First (SJF). With this strategy the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advance knowledge of, or estimates of, the time required for each process to complete.
If a shorter process arrives during another process' execution, the currently running process may be interrupted, dividing that process into two separate computing blocks. This creates excess overhead through additional context switching. The scheduler must also place each incoming process into a specific place in the queue, creating additional overhead.
This algorithm is designed for maximum throughput in most scenarios.
Waiting time and response time increase as a process's computational requirements increase. Since turnaround time is based on waiting time plus processing time, longer processes are significantly affected by this. Overall waiting time, however, is smaller than under FIFO, since no process has to wait behind the termination of the longest process.
No particular attention is given to deadlines; the programmer can only attempt to make processes with deadlines as short as possible.
Starvation is possible, especially in a busy system with many small processes being run.
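A minimal shortest-remaining-time simulation (my own sketch, with invented arrival and burst times) shows the preemption described above:

```python
def srtf(procs):
    """procs: list of (arrival, burst). Returns each process's completion time."""
    remaining = {i: burst for i, (arrival, burst) in enumerate(procs)}
    done, t = {}, 0
    while remaining:
        ready = [i for i in remaining if procs[i][0] <= t]
        if not ready:
            t += 1                                   # CPU idle until next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])   # least remaining work runs
        remaining[i] -= 1                            # run it for one time unit
        t += 1
        if remaining[i] == 0:
            done[i] = t
            del remaining[i]
    return [done[i] for i in range(len(procs))]

# A 2-unit job arriving at t=1 preempts the 8-unit job that started at t=0.
print(srtf([(0, 8), (1, 2)]))  # [10, 3]
```

The long job's execution is split into two blocks around the short arrival, which is the source of the extra context-switch overhead the text mentions.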
Fixed priority pre-emptive scheduling
The operating system assigns a fixed priority rank to every process, and the scheduler arranges the processes in the ready queue in order of their priority. Lower-priority processes are interrupted by incoming higher-priority processes.
Overhead is not minimal, nor is it significant.
FPPS has no particular advantage in terms of throughput over FIFO scheduling.
Waiting time and response time depend on the priority of the process. Higher priority processes have smaller waiting and response times.
Deadlines can be met by giving processes with deadlines a higher priority.
Starvation of lower priority processes is possible with large amounts of high priority processes queuing for CPU time.
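The selection rule behind fixed-priority scheduling maps naturally onto a priority queue. This sketch (process names and priorities are invented) shows both the rule and why a low-priority entry is served last:

```python
import heapq

# (priority, name) pairs; lower number = higher priority.
ready = []
for prio, name in [(5, "batch job"), (1, "interrupt handler"), (2, "editor")]:
    heapq.heappush(ready, (prio, name))

dispatch_order = []
while ready:
    dispatch_order.append(heapq.heappop(ready)[1])  # best priority always wins
print(dispatch_order)  # ['interrupt handler', 'editor', 'batch job']
```

If new high-priority entries kept arriving faster than they were popped, the batch job would never reach the front - the starvation scenario described above.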
Multilevel feedback queue
In computer science, a multilevel feedback queue is a scheduling algorithm. It is intended to meet the following design requirements for multimode systems:
Give preference to short jobs.
Give preference to I/O bound processes.
Quickly establish the nature of a process and schedule the process accordingly.
Multiple FIFO queues are used and the operation is as follows:
A new process is positioned at the end of the top-level FIFO queue.
At some stage the process reaches the head of the queue and is assigned the CPU.
If the process is completed it leaves the system.
If the process voluntarily relinquishes control it leaves the queuing network, and when the process becomes ready again it enters the system on the same queue level.
If the process uses all the quantum time, it is pre-empted and positioned at the end of the next lower level queue.
This will continue until the process completes or it reaches the base level queue.
At the base level queue the processes circulate in round-robin fashion until they complete and leave the system.
Optionally, if a process blocks for I/O, it is 'promoted' one level and placed at the end of the next-higher queue. This allows I/O-bound processes to be favored by the scheduler and allows processes to 'escape' the base level queue.
In the multilevel feedback queue, a process is given just one chance to complete at a given queue level before it is forced down to a lower level.
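The demotion rules above can be sketched directly (a toy model with an assumed quantum of 4 units and invented burst lengths; I/O promotion is omitted for brevity):

```python
from collections import deque

def mlfq(bursts, quantum=4, levels=3):
    """Run CPU-bound jobs through a multilevel feedback queue; return finish order."""
    queues = [deque() for _ in range(levels)]
    for pid, burst in enumerate(bursts):
        queues[0].append((pid, burst))            # new processes enter the top queue
    finished = []
    while any(queues):
        level = next(l for l in range(levels) if queues[l])  # highest non-empty queue
        pid, remaining = queues[level].popleft()
        if remaining <= quantum:
            finished.append(pid)                  # completes within its quantum, leaves
        else:
            dest = min(level + 1, levels - 1)     # used the whole quantum: demote
            queues[dest].append((pid, remaining - quantum))   # base level recirculates
    return finished

print(mlfq([3, 10, 2]))  # [0, 2, 1]: both short jobs finish before the long one
```

The long job (burst 10) sinks two levels and completes last, so short jobs get preference exactly as the design requirements state.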
Monday, 14 February 2011
Round Robin & Shortest Job First
Round-robin
Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system, which assigns time slices to each process in equal portions and in circular order, handling all processes without priority (also known as cyclic executive). Round-robin scheduling is both simple and easy to implement, and starvation-free. Round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks.
The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn.
Process scheduling
Round-robin job scheduling may not be desirable if the sizes of the jobs or tasks are highly variable. A process that produces large jobs would be favoured over other processes. This problem may be solved by time-sharing, i.e. by giving each job a time slot or quantum (its allowance of CPU time), and interrupt the job if it is not completed by then. The job is resumed next time a time slot is assigned to that process.
Example: The time slot could be 100 milliseconds. If job1 takes a total time of 250ms to complete, the round-robin scheduler will suspend the job after 100ms and give other jobs their time on the CPU. Once the other jobs have had their equal share (100ms each), job1 will get another allocation of CPU time and the cycle will repeat. This process continues until the job finishes and needs no more time on the CPU.
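The 100 ms example above can be replayed in code. This sketch assumes job1 = 250 ms plus two other jobs of 100 ms each (the "other jobs" are my own choice; the text does not specify them):

```python
from collections import deque

def round_robin(jobs, quantum=100):
    """jobs: {name: burst in ms}. Returns each job's finish time on one CPU."""
    queue = deque(jobs.items())
    finish, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        clock += slice_
        if remaining > slice_:
            queue.append((name, remaining - slice_))  # not done: back of the queue
        else:
            finish[name] = clock                      # done: needs no more CPU time
    return finish

finish = round_robin({"job1": 250, "job2": 100, "job3": 100})
print(finish)  # job1 is suspended twice and finishes last, at 450 ms
```

job1 runs 100 + 100 + 50 ms across three turns, matching the cycle described in the example.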
Data packet scheduling
Shortest-Job-First (SJF)
Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when the CPU becomes available, it is assigned to the process that has the smallest next CPU burst.
SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is provably optimal in that respect.
The SJF algorithm favors short jobs (or processes) at the expense of longer ones.
The obvious problem with the SJF scheme is that it requires precise knowledge of how long a job or process will run, and this information is not usually available.
Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time.
Two schemes exist: non-preemptive and preemptive (described below).
SJF is optimal in that it gives the minimum average waiting time for a given set of processes.
§ Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
§ If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
§ Performance:
- If q is large, round-robin behaves like FIFO.
- If q is small, q must still be large relative to the context-switch time, otherwise the overhead is too high.
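The (n-1)q bound can be checked with a small simulation (my own sketch; it assumes equal-length processes whose bursts are a multiple of q, and ignores context-switch time):

```python
def max_gap_between_slices(n, q, burst):
    """Simulate round-robin for n equal jobs; return the longest wait between
    two consecutive time slices granted to the same process."""
    schedule = []                      # (start time, pid) for each quantum granted
    remaining = [burst] * n
    t, i = 0, 0
    while any(remaining):
        pid = i % n                    # strict circular order
        if remaining[pid] > 0:
            schedule.append((t, pid))
            run = min(q, remaining[pid])
            remaining[pid] -= run
            t += run
        i += 1
    gaps = []
    for pid in range(n):
        starts = [s for s, p in schedule if p == pid]
        gaps += [b - a - q for a, b in zip(starts, starts[1:])]  # idle time between runs
    return max(gaps)

n, q = 4, 10
print(max_gap_between_slices(n, q, burst=30), "<=", (n - 1) * q)  # 30 <= 30
```

With four processes the worst gap is exactly (n-1)q = 30: each process waits through the other three quanta, and no longer.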
Data packet scheduling
§ In best-effort packet switching and other statistical multiplexing, round-robin scheduling can be used as an alternative to first-come first-served queuing.
§ A multiplexer, switch, or router that provides round-robin scheduling has a separate queue for every data flow, where a data flow may be identified by its source and destination address. The algorithm lets every active data flow (one that has data packets in the queue) take turns transferring packets on a shared channel in a periodically repeated order. The scheduling is work-conserving, meaning that if one flow is out of packets, the next data flow takes its place (the scheduling tries to avoid letting link resources go unused).
§ Round-robin scheduling results in max-min fairness if the data packets are equally sized, since the data flow that has waited the longest time is given scheduling priority.
§ Round-robin scheduling may not be desirable if the sizes of the jobs or tasks are strongly varying. A user that produces large jobs would be favored over other users. In that case fair queuing would be preferable.
§ If guaranteed or differentiated quality of service is offered, and not only best-effort communication, deficit round-robin (DRR) scheduling, weighted round-robin (WRR) scheduling, or weighted fair queuing (WFQ) may be considered.
§ In multiple-access networks, where several terminals are connected to a shared physical medium, round-robin scheduling may be provided by Token passing channel access schemes such as token ring, as well as by polling or resource reservation from a central control station.
§ In a centralized wireless packet radio network, where many stations share one frequency channel, a scheduling algorithm in a central base station may reserve time slots for the mobile stations in a round-robin fashion and provide fairness. However, if link adaptation is used, it will take a much longer time to transmit a certain amount of data to "expensive" users than to others since the channel conditions differ. It would be more efficient to wait with the transmission until the channel conditions are improved, or at least to give scheduling priority to less expensive users. Round-robin scheduling does not utilize this. Higher throughput and system spectrum efficiency may be achieved by channel-dependent scheduling, for example a proportionally fair algorithm, or maximum throughput scheduling. Note that the latter is characterized by undesirable scheduling starvation.
§ Round-robin scheduling in UNIX follows the same concept, and it can be implemented using semaphores.
Shortest-Job-First (SJF)
- Non-preemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
- Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, the new process preempts it. This scheme is known as Shortest-Remaining-Time-First (SRTF).
Process Burst Time
P1 6
P2 8
P3 7
P4 3
§ Using SJF scheduling, the processes P1, P2, P3 and P4 will be scheduled as shown in this Gantt chart:
| P4 | P1 | P3 | P2 |
0    3    9    16    24
§ The wait times are P4 = 0, P1 = 3, P3 = 9 and P2 = 16. Thus, the average waiting time is (0 + 3 + 9 + 16)/4 = 7 ms.
§ For the same case, the average waiting time under FCFS scheduling would be 10.25 ms.
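Both averages above are easy to re-derive in code (a pure arithmetic check using the burst times from the table):

```python
bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}

def avg_wait(order):
    """Average waiting time when processes run back-to-back in the given order."""
    clock, total = 0, 0
    for name in order:
        total += clock          # wait = time elapsed before this process starts
        clock += bursts[name]
    return total / len(order)

sjf = avg_wait(["P4", "P1", "P3", "P2"])   # shortest burst first
fcfs = avg_wait(["P1", "P2", "P3", "P4"])  # arrival order
print(sjf, fcfs)  # 7.0 10.25
```

Reordering the same four bursts from arrival order to shortest-first cuts the average wait from 10.25 ms to 7 ms.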
§ The difficulty with the SJF algorithm is determining the length of the next CPU burst, which is generally predicted as an exponential average of the measured lengths of previous CPU bursts.
§ The SJF algorithm can be non-pre-emptive or pre-emptive.
§ A pre-emptive priority scheduling algorithm will pre-empt the CPU if the priority of the newly arrived process is higher than that of the currently running process.
§ The limitation of this algorithm is that in a heavily loaded system, a steady stream of high-priority processes can prevent a low-priority process from ever getting the CPU.
§ A solution to this starvation of low-priority requests is to incorporate an additional parameter, aging, which means that the priority of a task is increased as it grows older.
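The aging idea can be sketched in a few lines (the boost rate, priority values and task names are my own assumptions, not from the text):

```python
def pick_with_aging(waiting, boost_per_pass=1):
    """waiting: {name: (base_priority, passes_waited)}; lower effective value wins.
    Each pass spent waiting improves a task's effective priority by boost_per_pass."""
    def effective(item):
        name, (base, waited) = item
        return base - waited * boost_per_pass
    return min(waiting.items(), key=effective)[0]

# After waiting 10 passes, a low-priority task (base 12, effective 12 - 10 = 2)
# outranks a freshly arrived high-priority task (base 3, effective 3).
print(pick_with_aging({"old_low": (12, 10), "new_high": (3, 0)}))  # old_low
```

Because every pass of waiting improves a task's effective priority, even the lowest-priority task eventually outranks a steady stream of fresh high-priority arrivals, which prevents starvation.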
1000 MHz (megahertz) = 1 GHz (gigahertz) = 1,000,000,000 cycles per second (or computer instructions).
Now you can see why they abbreviate it; could you imagine going to a PC store and asking for a "one thousand million cycle" PC? A bit of a mouthful, isn't it?
So when buying a new computer, always look for the fastest you can afford. The fastest on the market at the time of writing this article is 3.8 GHz (3800 MHz). Remember, though, that it is not necessary to purchase such a fast processor; balance your needs. Do you really need top of the range? Especially when the difference between, say, a 3.5 GHz (3500 MHz) and a 3.8 GHz (3800 MHz) processor will be barely noticed (if noticed at all) by you, while the price difference is around £100. With the money you save you could get a nice printer and scanner package.
Now that we have covered the speeds, there is one more important subject to cover: which processor? There are three competitors at present: the AMD Athlon, the Intel Pentium and the Intel Celeron. They come in many guises, but basically, more cores and higher clock speeds mean better, faster performance.
Processors now come as dual core, triple core and quad core. These processors are the equivalent of running two CPUs (dual core), three CPUs (triple core) or four CPUs (quad core).
Process Management
Process management is the ensemble of activities of planning and monitoring the performance of a process. The term usually refers to the management of business processes and manufacturing processes. Business process management (BPM) and business process reengineering are interrelated, but not identical.
Process management is the application of knowledge, skills, tools, techniques and systems to define, visualize, measure, control, report and improve processes with the goal to meet customer requirements profitably. It can be differentiated from program management in that program management is concerned with managing a group of inter-dependent projects. But from another viewpoint, process management includes program management. In project management, process management is the use of a repeatable process to improve the outcome of the project.
Process (Computing)
In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.
A computer program is a passive collection of instructions, while a process is the actual execution of those instructions. Several processes may be associated with the same program; for example, opening up several instances of the same program often means more than one process is being executed.
Multitasking is a method to allow multiple processes to share processors (CPUs) and other system resources. Each CPU executes a single task at a time. However, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish. Depending on the operating system implementation, switches could be performed when tasks perform input/output operations, when a task indicates that it can be switched, or on hardware interrupts.
A common form of multitasking is time-sharing. Time-sharing is a method to allow fast response for interactive user applications. In time-sharing systems, context switches are performed rapidly. This makes it seem like multiple processes are being executed simultaneously on the same processor. The execution of multiple processes seemingly simultaneously is called concurrency.
For security and reliability reasons most modern operating systems prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication functionality.
The operating system holds most of this information about active processes in data structures called process control blocks.
Any subset of resources, but typically at least the processor state, may be associated with each of the process's threads in operating systems that support threads or 'daughter' processes.
The operating system keeps its processes separated and allocates the resources they need so that they are less likely to interfere with each other and cause system failures (e.g., deadlock or thrashing). The operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways.
Process Management (Computing)
A multitasking operating system may just switch between processes to give the appearance of many processes executing concurrently or simultaneously, though in fact only one process can be executing at any one time on a single-core CPU (unless using multi-threading or other similar technology).
It is usual to associate a single process with a main program, and 'daughter' ('child') processes with any spin-off, parallel processes, which behave like asynchronous subroutines. A process is said to own resources, of which an image of its program (in memory) is one such resource. (Note, however, that in multiprocessing systems, many processes may run off of, or share, the same reentrant program at the same location in memory but each process is said to own its own image of the program.)
Processes are often called tasks in embedded operating systems. The sense of 'process' (or task) is 'something that takes up time', as opposed to 'memory', which is 'something that takes up space'. (Historically, the terms 'task' and 'process' were used interchangeably, but the term 'task' seems to be dropping from the computer lexicon.)
The above description applies to both processes managed by an operating system, and processes as defined by process calculi.
If a process requests something for which it must wait, it will be blocked. When the process is in the Blocked State, it is eligible for swapping to disk, but this is transparent in a virtual memory system, where blocks of memory values may be really on disk and not in main memory at any time. Note that even unused portions of active processes/tasks (executing programs) are eligible for swapping to disk. All parts of an executing program and its data do not have to be in physical memory for the associated process to be active.
Process States
The various process states, displayed in a state diagram, with arrows indicating possible transitions between states.
An operating system kernel that allows multi-tasking needs processes to have certain states. Names for these states are not standardised, but they have similar functionality.
* First, the process is "created" - it is loaded from a secondary storage device (hard disk or CD-ROM...) into main memory. After that the process scheduler assigns it the state "waiting".
* While the process is "waiting" it waits for the scheduler to do a so-called context switch and load the process into the processor. The process state then becomes "running", and the processor executes the process instructions.
* If a process needs to wait for a resource (wait for user input or file to open ...), it is assigned the "blocked" state. The process state is changed back to "waiting" when the process no longer needs to wait.
* Once the process finishes execution, or is terminated by the operating system, it is no longer needed. The process is removed instantly or is moved to the "terminated" state. When removed, it just waits to be removed from main memory.
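The state transitions described above form a small state machine. Here is one way to encode it in Python (the state names follow the text; the event names and the transition table itself are my own encoding):

```python
# (current state, event) -> next state
TRANSITIONS = {
    ("created", "admit"): "waiting",
    ("waiting", "dispatch"): "running",       # context switch onto the processor
    ("running", "need_resource"): "blocked",  # e.g. waiting for user input or a file
    ("blocked", "resource_ready"): "waiting",
    ("running", "preempt"): "waiting",        # time slice expired
    ("running", "exit"): "terminated",
}

def step(state, event):
    return TRANSITIONS[(state, event)]

s = "created"
for event in ["admit", "dispatch", "need_resource",
              "resource_ready", "dispatch", "exit"]:
    s = step(s, event)
print(s)  # terminated
```

Walking the events in order traces a full lifetime: created, waiting, running, blocked on a resource, back to waiting, running again, and finally terminated.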
Inter-process communication
When processes communicate with each other, this is called inter-process communication (IPC). Processes frequently need to communicate; for instance, in a shell pipeline the output of the first process must be passed to the second, and so on down the line. It is preferable for this communication to happen in a well-structured way, without using interrupts.
It is even possible for the two processes to be running on different machines. The operating system may differ from one process to the other, so some mediators (called protocols) are needed.
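The shell-pipeline example above can be reproduced with two real processes connected by a pipe (a sketch assuming a POSIX system where `echo` and `tr` are available):

```python
import subprocess

# First process writes to the pipe; second reads from it and transforms the data.
first = subprocess.Popen(["echo", "inter-process communication"],
                         stdout=subprocess.PIPE)
second = subprocess.Popen(["tr", "a-z", "A-Z"],
                          stdin=first.stdout, stdout=subprocess.PIPE)
first.stdout.close()  # close our copy so only the second process holds the read end
out = second.communicate()[0].decode().strip()
print(out)  # INTER-PROCESS COMMUNICATION
```

This is the same mediated, well-structured channel the text describes: neither process touches the other's memory; the operating system carries the bytes between them.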
Program Management
Program management or programme management is the process of managing several related projects, often with the intention of improving an organization's performance. In practice and in its aims it is often closely related to systems engineering and industrial engineering.
There are two different views of how programs differ from projects.
On one view, projects deliver outputs (discrete parcels or "chunks" of change), while programs create outcomes. On this view, a project might deliver a new factory, hospital or IT system. By combining these projects with other deliverables and changes, their programs might deliver increased income from a new product, shorter waiting lists at the hospital or reduced operating costs due to improved technology.
The other view is that a program is nothing more than either a large project or a set (or portfolio) of projects. On this second view, the point of having a program is to exploit economies of scale and to reduce coordination costs and risks. The project manager's job is to ensure that their project succeeds. The program manager, on the other hand, may not care about individual projects, but is concerned with the aggregate result or end-state. For example, in a financial institution a program may include one project that is designed to take advantage of a rising market, and another to protect against the downside of a falling market. These projects are opposites with respect to their success conditions, but they fit together in the same program.
According to the view that programs deliver outcomes while projects deliver outputs, program management is concerned with doing the right projects. The program manager has been described as 'playing chess' and keeping the overview in mind, with the projects as the pieces to be used or sacrificed; project management, by contrast, is about doing projects right. Also according to this view, successful projects deliver on time, to budget and to specification, whereas successful programs deliver long-term improvements to an organization. Improvements are usually identified through benefits. An organization should select the group of programs that most take it towards its strategic aims while remaining within its capacity to deliver the changes. On the other hand, the view that programs are simply large projects, or a set of projects, allows that a program may need to deliver tangible benefits quickly.
Consider the following set of projects:
* design of the new product - this delivers a design specification,
* modifications to the production line or factory - delivers a manufacturing capability,
* marketing - delivers advertisements, brochures and pamphlets,
* staff training - delivers staff trained to sell and support the new product.
One view has it that these are different projects within a program. But in practice they can just as well be managed as sub-projects within a single project. Which approach to choose? Program and project management are both practical disciplines, and the answer to such a question must be "whatever works." What works depends very much on the nature of the organization in which the project or program is run. Typically a program is broken down into projects that reflect the organization's structure. The design project will be run by the design team, the factory will manage the modifications to the production line, and so on. Organizational structure and organizational culture are key factors in how to structure a program.
The distinction between the terms "outcome" and "output" is far from clear, except in a trivial sense. Each of the projects listed in the example above is designed to deliver some 'thing', known as a 'deliverable' or an 'output', and together they improve the organization. Where one draws the line between the complete single benefit that causes the improvement and its component parts is partly a matter of preference and partly a matter of the culture and structure of the organization. Either way, benefits will normally be enjoyed long after the end of the program and all of its component projects. The point is that to achieve maximum benefits, there must be an integration of parts into a whole. Whether this integration is managed in something that is called a project or a program is of secondary importance to understanding the benefits and managing the process of integration well.
Many programs are concerned with delivering a capability to change. Only when that capability is transferred to the line management and utilized by the host organization will the benefits actually be delivered. On this view, a program team cannot, on their own, deliver benefits. Benefits can only be delivered through the utilization of a new capability.
Programs are normally designed to deliver the organization's strategy, such as an ambition to be the fourth biggest supermarket in a region by 2015 or to reduce wastage by 5% in two years' time.
According to the Project Management Institute (PMI), The Standard for Program Management, 2nd Ed., "A Program is a group of related projects managed in a coordinated manner to obtain benefits and control not available from managing them individually. Programs may include elements of related work outside of the scope of the discrete projects in the program... Some projects within a program can deliver useful incremental benefits to the organization before the program itself has completed."
Program management also emphasizes coordinating and prioritizing resources across projects, and managing the links between projects as well as the overall costs and risks of the program.
Program management may provide a layer above the management of projects, focusing on selecting the best group of projects, defining them in terms of their objectives, and providing an environment where projects can be run successfully. Program managers should not micromanage, but should leave project management to the project managers.
Many organizations run only one program at a time, a program containing all their projects. In Project Management Institute terminology, this is more likely to be a project portfolio than a program. Some larger organizations may have multiple programs, each designed to deliver a range of improvements. Some organizations use the concept of Systems Engineering while others use program management.
The Processor
The processor tells your computer what to do and when to do it; it decides which tasks are most important and prioritizes them according to your computer's needs. There are, and have been, many processors on the market, running at many different speeds. Speed is measured in megahertz (MHz). One MHz represents 1 million cycles (or computer instructions) per second, so a processor running at 2000 MHz is running at 2,000,000,000 cycles per second, which in more basic terms is the number of instructions your computer can carry out. Another important abbreviation is gigahertz (GHz): 1 GHz is the same as 1000 MHz. It sounds a bit confusing, so here is a simple conversion:
1000 MHz (megahertz) = 1 GHz (gigahertz) = 1,000,000,000 cycles per second (or computer instructions).
Now you can see why they abbreviate it. Could you imagine going into a PC store and asking for a "one thousand million cycles per second" PC, please? A bit of a mouthful, isn't it?
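The conversions above are simple multiplications by powers of a thousand. As a quick illustration (a minimal sketch; the function names are our own, not from any standard library), here they are in Python:

```python
# Clock-speed unit conversions:
# 1 GHz = 1000 MHz = 1,000,000,000 cycles per second.

def mhz_to_ghz(mhz):
    """Convert a speed in megahertz to gigahertz."""
    return mhz / 1000

def mhz_to_cycles_per_second(mhz):
    """Convert a speed in megahertz to raw cycles per second."""
    return mhz * 1_000_000

print(mhz_to_ghz(3800))                # 3.8
print(mhz_to_cycles_per_second(2000))  # 2000000000
```

So the 3.8 GHz chip mentioned below is just 3800 MHz, or 3,800,000,000 cycles per second, written in a friendlier unit.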
So when buying a new computer, always look for the fastest you can afford. The fastest on the market at the time of writing this article is 3.8 GHz (3800 MHz). Remember, though, that it is not always necessary to purchase such a fast processor: balance your needs, and ask whether you really need top of the range. The difference between, say, a 3.5 GHz (3500 MHz) and a 3.8 GHz (3800 MHz) processor will barely be noticed by you (if noticed at all), while the price difference is around £100. With the money you save you could get a nice printer and scanner package.
Now that we have covered speeds, there is one more important question to cover: which processor? There are three main competitors at present: the AMD Athlon, the Intel Pentium and the Intel Celeron. They come in many guises, but broadly speaking, more cores and a higher clock speed mean better, faster performance.
Processors now come in dual-core, triple-core and quad-core versions. These processors are the equivalent of running two CPUs (dual core), three CPUs (triple core) or four CPUs (quad core).
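If you are curious how many cores your own machine has, most operating systems will tell you. As a small sketch, Python's standard library exposes this through `os.cpu_count()` (note that on many chips each physical core appears as two logical cores because of hyper-threading, so the number reported may be double the physical core count):

```python
import os

# Ask the operating system how many logical CPU cores it exposes.
# A quad-core chip typically reports 4, or 8 with hyper-threading.
cores = os.cpu_count()
print(f"Logical CPU cores available: {cores}")
```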
In the past the Intel Pentium was the best and most expensive of them all, and it remains today one of the most popular on the market. In layman's terms it is, or was, the designer processor, although AMD has some superb, if not better, releases and equally advanced and highly priced products. It would be hard to say which is best, as they are direct competitors.
Lastly, there is the Intel Celeron. This processor is a budget version of the Intel Pentium 4 and is the one you will find in most budget computers. If the purse strings are tight and you need a computer, then this is your port of call. You will find many sub-£400 computers fitted with this processor.