Process Generation in Operating Systems
Microsoft expected that the first version of NT would kill off MS-DOS and all other versions of Windows since it was a vastly superior system, but it fizzled.

Only with Windows NT 4.0 did it finally catch on in a big way. Version 5 of Windows NT was renamed Windows 2000 in early 1999. It was intended to be the successor to both Windows 98 and Windows NT 4.0. That did not quite work out either, so Microsoft came out with yet another version of Windows 98 called Windows Me (Millennium Edition). The other major contender in the personal computer world is UNIX and its various derivatives.

UNIX is strongest on workstations and other high-end computers, such as network servers. It is especially popular on machines powered by high-performance RISC chips. On Pentium-based computers, Linux is becoming a popular alternative to Windows for students and increasingly many corporate users.

This system handles the basic window management, allowing users to create, delete, move, and resize windows using a mouse. An interesting development that began taking place during the mid-1980s is the growth of networks of personal computers running network operating systems and distributed operating systems (Tanenbaum and Van Steen, 2002). In a network operating system, the users are aware of the existence of multiple computers and can log in to remote machines and copy files from one machine to another.

Each machine runs its own local operating system and has its own local user or users. Network operating systems are not fundamentally different from single-processor operating systems. They obviously need a network interface controller and some low-level software to drive it, as well as programs to achieve remote login and remote file access, but these additions do not change the essential structure of the operating system.

A distributed operating system, in contrast, is one that appears to its users as a traditional uniprocessor system, even though it is actually composed of multiple processors. The users should not be aware of where their programs are being run or where their files are located; that should all be handled automatically and efficiently by the operating system.

True distributed operating systems require more than just adding a little code to a uniprocessor operating system, because distributed and centralized systems differ in critical ways. Distributed systems, for example, often allow applications to run on several processors at the same time, thus requiring more complex processor scheduling algorithms in order to optimize the amount of parallelism.

Communication delays within the network often mean that these and other algorithms must run with incomplete, outdated, or even incorrect information. This situation is radically different from a single-processor system, in which the operating system has complete information about the system state. An old biological dictum holds that ontogeny recapitulates phylogeny; in other words, after fertilization, a human egg goes through stages of being a fish, a pig, and so on before turning into a human baby.

Modern biologists regard this as a gross simplification, but it still has a kernel of truth in it. Something analogous has happened in the computer industry.

Each new species (mainframe, minicomputer, personal computer, embedded computer, smart card, etc.) seems to go through the development that its ancestors did. The first mainframes were programmed entirely in assembly language. Even complex programs, like compilers and operating systems, were written in assembler. When microcomputers (early personal computers) were invented, they, too, were programmed in assembler, even though by then minicomputers were also programmed in high-level languages. Palmtop computers also started with assembly code but quickly moved on to high-level languages, mostly because the development work was done on bigger machines.

The same is true for smart cards. Now let us look at operating systems. The first mainframes initially had no protection hardware and no support for multiprogramming, so they ran simple operating systems that handled one manually-loaded program at a time.

Later they acquired the hardware and operating system support to handle multiple programs at once, and then full timesharing capabilities. When minicomputers first appeared, they also had no protection hardware and ran one manually-loaded program at a time, even though multiprogramming was well established in the mainframe world by then.

Gradually, they acquired protection hardware and the ability to run two or more programs at once. The first microcomputers were also capable of running only one program at a time, but later acquired the ability to multiprogram. Palmtops and smart cards went the same route. Disks first appeared on large mainframes, then on minicomputers, microcomputers, and so on down the line.

Even now, smart cards do not have hard disks, but with the advent of flash ROM, they will soon have the equivalent of one. When disks first appeared, primitive file systems sprang up. On the CDC 6600, easily the most powerful mainframe in the world during much of the 1960s, the file system consisted of users having the ability to create a file and then declare it to be permanent, meaning it stayed on the disk even after the creating program exited.

To access such a file later, a program had to attach it with a special command and give its password (supplied when the file was made permanent). In effect, there was a single directory shared by all users. It was up to the users to avoid file name conflicts. Early minicomputer file systems had a single directory shared by all users, and so did early microcomputer file systems. Virtual memory (the ability to run programs larger than the physical memory) had a similar development.

It first appeared in mainframes, then in minicomputers and microcomputers, and gradually worked its way down to smaller and smaller systems. Networking had a similar history. In all cases, the software development was dictated by the technology. The first microcomputers, for example, had something like 4 KB of memory and no protection hardware.

High-level languages and multiprogramming were simply too much for such a tiny system to handle. As the microcomputers evolved into modern personal computers, they acquired the necessary hardware and then the necessary software to handle more advanced features.

It is likely that this development will continue for years to come. Other fields may also have this wheel of reincarnation, but in the computer industry it seems to spin faster.

Security starts with each user having to authenticate to the system, usually by means of a password. In a multiprocess environment, it is possible for one process to interfere with another, or with the operating system, so protection is required.

System calls provide the interface between a process and the operating system. A system call is made using a special machine language instruction that generates an interrupt, causing the operating system to gain control of the processor.
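Concretely, a user program reaches these services through thin library wrappers around the system call instruction. A minimal sketch in Python, whose os module exposes such wrappers; the specific calls chosen here are just illustrative examples:

```python
import os

# Process management: getpid() wraps the kernel's "who am I" call.
pid = os.getpid()

# Interprocess communication: pipe() asks the kernel for a one-way channel.
r, w = os.pipe()

# File management: write/read operate on kernel-managed file descriptors.
os.write(w, b"ping")
msg = os.read(r, 4)
os.close(r)
os.close(w)

print(pid, msg)
```

Each of these functions traps into the kernel and returns once the operating system has done the requested work on the process's behalf.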

System calls can be grouped into five major categories:

1. File management
2. Interprocess communication
3. Process management
4. Device management
5. Information maintenance

Following are a few common services provided by operating systems. Each of these activities is encapsulated as a process.

A process includes the complete execution context (code to execute, data to manipulate, registers, OS resources in use). Following are the major activities of an operating system with respect to program management. Device drivers hide the peculiarities of specific hardware devices from the user, since each driver knows the peculiarities of its specific device.

The operating system manages the communication between users and device drivers. File system manipulation: a file represents a collection of related information. Computers can store files on disk (secondary storage) for long-term storage.

Each of these media has its own properties like speed, capacity, data transfer rate and data access methods. A file system is normally organized into directories for easy navigation and usage.

Following are the major activities of an operating system with respect to file management. Communication: in the case of distributed systems, which are collections of processors that do not share memory, peripheral devices, or a clock, the operating system manages communication between processes. Multiple processes communicate with one another through communication lines in the network. The OS handles routing and connection strategies, and the problems of contention and security. Following are the major activities of an operating system with respect to communication.

Following are the major activities of an operating system with respect to error handling. Resource management: in a multi-user or multi-tasking environment, resources such as main memory, CPU cycles and file storage are to be allocated to each user or job. Following are the major activities of an operating system with respect to resource management. Protection: considering computer systems with multiple users and the concurrent execution of multiple processes, the various processes must be protected from one another's activities.

Protection refers to mechanism or a way to control the access of programs, processes, or users to the resources defined by computer systems.

Following are the major activities of an operating system with respect to protection. Various properties of an operating system: the following are a few of the very important tasks that an operating system handles. Batch processing: batch processing is a technique in which the operating system collects programs and data together in a batch before processing starts.

The operating system does the following activities related to batch processing. Multitasking: multitasking refers to multiple jobs being executed by the CPU simultaneously by switching between them. Switches occur so frequently that the users may interact with each program while it is running.

The operating system does the following activities related to multitasking. While one process waits (for example, for I/O), the CPU can be utilized by another process. Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user. When two or more programs reside in memory at the same time, sharing the processor is referred to as multiprogramming. Multiprogramming assumes a single shared processor.
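The time multiplexing described here can be illustrated with a toy round-robin simulation; the job names, burst times and quantum below are invented for the example:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin CPU sharing: each job runs for at most one
    quantum, then is preempted and requeued until its burst is exhausted."""
    queue = deque(bursts.items())
    order = []                        # which job held the CPU, in sequence
    while queue:
        job, remaining = queue.popleft()
        order.append(job)
        if remaining > quantum:       # not finished: preempt and requeue
            queue.append((job, remaining - quantum))
    return order

print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))
```

With a quantum of 1, the jobs are interleaved until each job's burst time is used up, which is exactly the frequent switching that makes each user feel the machine is theirs.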

The following figure shows the memory layout for a multiprogramming system. The operating system does the following activities related to multiprogramming. Interactivity: interactivity refers to the ability of a user to interact with the computer system. The operating system does the following activities related to interactivity.

Users provide input through input devices (for example, a keyboard) and receive output through output devices (for example, a monitor). Real-time systems: real-time systems are usually dedicated, embedded systems. The operating system does the following activities related to real-time system activity. Distributed environment: a distributed environment refers to multiple independent CPUs or processors in a computer system. The operating system does the following activities related to a distributed environment.

Spooling is an acronym for simultaneous peripheral operations on-line. Process: a process is a program in execution. The execution of a process must progress in a sequential fashion. The components of a process are the following:

1. Object program – the code to be executed.
2. Data – the data to be used by the program.
3. Resources – the resources required during execution.
4. Status – the status of the process's execution.

A process can run to completion only when all requested resources have been allocated to it. Two or more processes could be executing the same program, each using its own data and resources.

Program: a program by itself is not a process. It is a static entity made up of program statements, while a process is a dynamic entity. A program contains the instructions to be executed by the processor. A program occupies a single place in main memory and continues to stay there. A program does not perform any action by itself. Process states: as a process executes, it changes state. The state of a process is defined as the current activity of the process.

A process can be in one of the following five states at a time:

1. New – the process is being created.
2. Ready – the process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run.
3. Running – process instructions are being executed; this is the process currently being executed.
4. Waiting – the process is waiting for some event to occur.
5. Terminated – the process has finished execution.

The PCB (Process Control Block) is the data structure used by the operating system.
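The transitions between these states can be sketched as a small table; the action names (admit, dispatch, and so on) are illustrative labels, not any particular kernel's API:

```python
# Transition table for the classic five-state process model.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "timeout"): "ready",     # preempted by the scheduler
    ("running", "wait"): "waiting",      # e.g. a blocking I/O request
    ("waiting", "event"): "ready",       # the awaited event occurred
    ("running", "exit"): "terminated",
}

def step(state, action):
    """Return the next state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"illegal transition: {action} in state {state}")

# A process being created, scheduled, blocking on I/O, and finishing:
s = "new"
for action in ["admit", "dispatch", "wait", "event", "dispatch", "exit"]:
    s = step(s, action)
print(s)  # terminated
```

Note that a process can only leave the waiting state for ready, never directly for running: the scheduler always makes the dispatch decision.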

The operating system groups together all the information it needs about a particular process. The PCB contains many pieces of information associated with a specific process, described below:

1. Pointer – used for maintaining the scheduling list.
2. Process state – may be new, ready, running, waiting and so on.
3. Program counter – indicates the address of the next instruction to be executed for this process.
4. CPU registers – include general purpose registers, stack pointers, index registers and accumulators, etc.
5. Memory management information – may include the value of base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system. This information is useful for deallocating the memory when the process terminates.
6. Accounting information – includes the amount of CPU and real time used, time limits, job or process numbers, account numbers, etc.
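As a sketch, the PCB fields above can be modeled as a plain record; the field names here are assumptions for illustration, not a real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block mirroring the field list above."""
    pid: int
    state: str = "new"             # new, ready, running, waiting, terminated
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    memory_info: dict = field(default_factory=dict)  # base/limit, page tables
    accounting: dict = field(default_factory=dict)   # CPU time, limits, etc.
    next: "PCB | None" = None      # pointer used by scheduling lists

pcb = PCB(pid=42)
pcb.state = "ready"
pcb.registers["pc_saved"] = 0x400  # saved at the last context switch
```

On a context switch, the kernel fills in fields like the saved registers and program counter so the process can later resume exactly where it left off.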

The PCB serves as the repository for any information which can vary from process to process. By this technique, the hardware state can be restored so that the process can be scheduled to run again. Definition: process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is an essential part of a multiprogramming operating system.

Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing. Scheduling queues: scheduling queues refer to queues of processes or devices. When a process enters the system, it is put into a job queue. This queue consists of all processes in the system.

The operating system also maintains other queues, such as device queues. Each device has its own device queue. The figure shows the queuing diagram of process scheduling. Processes wait in the ready queue to be allocated the CPU. Once the CPU is assigned to a process, that process executes.

While executing the process, any one of the following events can occur. The two-state process model refers to the running and non-running states, described below. Non-running: processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process. The queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, that process is transferred to the waiting queue.

If the process has completed or aborted, the process is discarded. In either case, the dispatcher then selects a process from the queue to execute. Schedulers: schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. The long-term scheduler determines which programs are admitted to the system for processing.
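The queue-and-dispatcher behaviour just described can be sketched with a FIFO ready queue; the process names are made up:

```python
from collections import deque

ready = deque(["P1", "P2", "P3"])   # hypothetical processes awaiting the CPU

def dispatch(completed=False):
    """Pop the head of the queue and "run" it. An interrupted process
    goes back to the tail of the queue; a completed or aborted one
    is discarded."""
    proc = ready.popleft()
    if not completed:
        ready.append(proc)          # interrupted: wait for another turn
    return proc

first = dispatch()                  # P1 runs, is interrupted, is requeued
second = dispatch(completed=True)   # P2 runs to completion, is discarded
print(first, second, list(ready))
```

The linked-list queue of the two-state model maps naturally onto a deque: removal always happens at the head, and interrupted processes rejoin at the tail.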

The job scheduler selects processes from the queue and loads them into memory for execution; a loaded process then awaits CPU scheduling. The job scheduler also controls the degree of multiprogramming.

If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system. On some systems, the long-term scheduler may not be available or may be minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler is used when a process changes state from new to ready. Its main objective is increasing system performance in accordance with the chosen set of criteria.

The short-term scheduler effects the change from the ready state to the running state of a process. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it. The short-term scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained decision of which process to execute next. The short-term scheduler is faster than the long-term scheduler. Medium-term scheduler: medium-term scheduling is part of swapping. It removes processes from memory.

It reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out processes. Suspended processes cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage.

This process is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix. Comparing the three schedulers:

The long-term scheduler controls the degree of multiprogramming; it is almost absent or minimal in a time-sharing system.
The short-term scheduler provides lesser control over the degree of multiprogramming; it is also minimal in a time-sharing system.
The medium-term scheduler reduces the degree of multiprogramming; it is a part of time-sharing systems.

A context switch is the mechanism to store and restore the state or context of a CPU in the Process Control Block so that a process execution can be resumed from the same point at a later time.

Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system. When the scheduler switches the CPU from executing one process to executing another, the context switcher saves the contents of all processor registers for the process being removed from the CPU in its process descriptor. The context of a process is represented in the process control block of the process.

Context switch time is pure overhead. Context switching can significantly affect performance, as modern computers have many general and status registers to be saved. Context switching times are highly dependent on hardware support.

Some hardware systems employ two or more sets of processor registers to reduce context switching time. When a process is switched, the following information is stored. In priority scheduling, the process with the highest priority is executed first, and so on. In round-robin scheduling, a process is preempted and another process executes for a given time period. Dining Philosophers Problem: the scenario involves five philosophers sitting at a round table with a bowl of food and five chopsticks.

Each chopstick sits between two adjacent philosophers. The philosophers are allowed to think and eat. Since two chopsticks are required for each philosopher to eat, and only five chopsticks exist at the table, no two adjacent philosophers may be eating at the same time. A scheduling problem arises as to who gets to eat at what time.

This problem is similar to the problem of scheduling processes that require a limited number of resources. The problem was designed to illustrate the challenges of avoiding deadlock, a system state in which no progress is possible.

A naive solution is to have each philosopher pick up the chopstick on the left and then the one on the right. This attempted solution fails because it allows the system to reach a deadlock state, in which no progress is possible: each philosopher has picked up the chopstick to the left and is waiting for the chopstick to the right to become available. What is a thread? A thread is a flow of execution through the process code, with its own program counter, system registers and stack.
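Returning to the dining philosophers: one standard deadlock-free remedy (one of several) is to impose a global order on the chopsticks and always acquire the lower-numbered one first, so a circular wait can never form. A sketch, with the round count chosen arbitrarily:

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N   # how many times each philosopher has eaten

def philosopher(i, rounds=10):
    left, right = i, (i + 1) % N
    # Global ordering: always grab the lower-numbered chopstick first.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1          # "eating"

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)
```

Because every philosopher respects the same acquisition order, the circular-wait condition required for deadlock can never hold, so all five always finish their rounds.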

A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving operating system performance by reducing overhead; a thread is equivalent to a classical process. Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers.

They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of single-threaded and multithreaded processes. Comparing processes and threads:

1. A process is heavyweight or resource intensive; a thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not need to interact with the operating system.
3. In multiple processing environments, each process executes the same code but has its own memory and file resources; all threads can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until the first process is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources; multithreaded processes use fewer resources.
6. In multiple processes, each process operates independently of the others; one thread can read, write or change another thread's data.

The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution and for saving and restoring thread contexts.
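A sketch of that library surface using Python's threading and queue modules — thread creation, message passing between threads, and joining; the worker's doubling logic is invented for the example:

```python
import threading
import queue

inbox = queue.Queue()   # thread-safe channel for passing data between threads
results = []

def worker():
    # Receive messages until the None sentinel asks us to shut down.
    while True:
        msg = inbox.get()
        if msg is None:
            break
        results.append(msg * 2)

t = threading.Thread(target=worker)   # create the thread
t.start()
for n in [1, 2, 3]:
    inbox.put(n)                      # pass data to the running thread
inbox.put(None)                       # sentinel: destroy (stop) the thread
t.join()                              # wait for it; the library handles
print(results)                        # saving/restoring its context
```

Scheduling and context save/restore happen beneath this interface, in the library or the kernel, which is exactly the division of labour the user-level versus kernel-level discussion below is about.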

The application begins with a single thread and begins running in that thread. Kernel-level threads: in this case, thread management is done by the kernel. There is no thread management code in the application area.

Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than the user threads.

Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a good example of this combined approach. Many-to-many model: in this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads.

The number of kernel threads may be specific to either a particular application or a particular machine. Many-to-one model: the many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space. When a thread makes a blocking system call, the entire process blocks. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

If the user-level thread libraries are implemented in the operating system in such a way that the system does not support them, then kernel threads use the many-to-one relationship mode. One-to-one model: there is a one-to-one relationship between user-level threads and kernel-level threads.

This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call. It supports multiple threads executing in parallel on multiprocessors. The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread.

Comparing user-level and kernel-level threads:

1. User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
2. Implementation is by a thread library at the user level; the operating system supports creation of kernel threads.
3. A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
4. A multithreaded application cannot take advantage of multiprocessing with user-level threads; kernel routines themselves can be multithreaded.

Race condition: a race condition is an undesirable situation that occurs when a device or system attempts to perform two or more operations at the same time, but, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.

A race condition occurs when two threads access a shared variable at the same time. The first thread reads the variable, and the second thread reads the same value from the variable. Then the first and second threads perform their operations on the value, and they race to see which thread can write its value last to the shared variable.

The value of the thread that writes last is preserved, because that thread writes over the value the previous thread wrote. Memory management is the functionality of an operating system which handles or manages primary memory. Memory management keeps track of each and every memory location, whether it is allocated to some process or free.
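Returning to the race condition above: the lost update can be prevented by making the read-modify-write sequence a critical section guarded by a lock. A sketch in Python:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    """Increment the shared counter n times, holding the lock so the
    read-modify-write is atomic with respect to the other threads."""
    global counter
    for _ in range(n):
        with lock:            # critical section: read, modify, write
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no updates are lost while the lock is held
```

Without the lock, two threads could both read the same old value and each write back old value + 1, so one of the two increments would be lost; the lock serializes the racing writes the passage describes.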

These machines were known as mainframes and were locked in air-conditioned computer rooms with staff to operate them. The batch system was introduced to reduce wasted computer time.

A tray full of jobs was collected in the input room and read onto magnetic tape. After that, the tape was rewound and mounted on a tape drive. Then the batch operating system was loaded, which read the first job from the tape and ran it.

The output was written to a second tape. After the whole batch was done, the input and output tapes were removed and the output tape was printed. Third-generation systems used integrated circuits and provided a major price and performance advantage over second-generation systems.

The third-generation operating systems also introduced multiprogramming: while one job waited, another job was scheduled on the processor so that its time would not be wasted.
