
Modern Operating Systems (Third Edition): Answers


MODERN OPERATING SYSTEMS

SECOND EDITION

PROBLEM SOLUTIONS

ANDREW S. TANENBAUM
Vrije Universiteit
Amsterdam, The Netherlands

PRENTICE HALL
UPPER SADDLE RIVER, NJ 07458


SOLUTIONS TO CHAPTER 1 PROBLEMS

1. An operating system must provide the users with an extended (i.e., virtual)

machine, and it must manage the I/O devices and other system resources.

2. Multiprogramming is the rapid switching of the CPU between multiple

processes in memory. It is commonly used to keep the CPU busy while one or more processes are doing I/O.

3. Input spooling is the technique of reading in jobs, for example, from cards,

onto the disk, so that when the currently executing processes are finished, there will be work waiting for the CPU. Output spooling consists of first copying printable files to disk before printing them, rather than printing directly as the output is generated. Input spooling on a personal computer is not very likely, but output spooling is.

4. The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the CPU will be 100 percent busy. This of course assumes the major delay is the wait while data are copied. A CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).

5. Second generation computers did not have the necessary hardware to protect

the operating system from malicious user programs.

6. It is still alive. For example, Intel makes Pentium I, II, III, and 4 CPUs with a variety of different properties including speed and power consumption.

All of these machines are architecturally compatible. They differ only in price and performance, which is the essence of the family idea.

7. A 25 × 80 character monochrome text screen requires a 2000-byte buffer. The

1024 × 768 pixel 24-bit color bitmap requires 2,359,296 bytes. In 1980 these two options would have cost $10 and $11,520, respectively. For current prices, check on how much RAM currently costs, probably less than $1/MB.
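As a rough check of these figures, here is a small sketch that recomputes the two buffer sizes and the 1980 costs (the $5/KB price is the one given in the problem):

#include <stdio.h>

int main(void)
{
    int text_bytes = 25 * 80;                /* one byte per character cell        */
    long bitmap_bytes = 1024L * 768 * 3;     /* 3 bytes per pixel for 24-bit color */
    double price_per_kb_1980 = 5.0;          /* 1980 price from the problem        */

    printf("text screen:  %d bytes, 1980 cost $%.0f\n",
           text_bytes, text_bytes / 1024.0 * price_per_kb_1980);
    printf("color bitmap: %ld bytes, 1980 cost $%.0f\n",
           bitmap_bytes, bitmap_bytes / 1024.0 * price_per_kb_1980);
    return 0;
}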

8. Choices (a), (c), and (d) should be restricted to kernel mode.

9. Personal computer systems are always interactive, often with only a single

user. Mainframe systems nearly always emphasize batch or timesharing with many users. Protection is much more of an issue on mainframe systems, as is efficient use of all resources.


10. Every nanosecond one instruction emerges from the pipeline. This means the

machine is executing 1 billion instructions per second. It does not matter at all how many stages the pipeline has. A 10-stage pipeline with 1 nsec per stage would also execute 1 billion instructions per second. All that matters is how often a finished instruction pops out the end of the pipeline.

11. The manuscript contains 80 × 50 × 700 = 2.8 million characters. This is, of

course, impossible to fit into the registers of any currently available CPU and is too big for a 1-MB cache, but if such hardware were available, the manuscript could be scanned in 2.8 msec from the registers or 5.8 msec from the cache. There are approximately 2700 1024-byte blocks of data, so scanning from the disk would require about 27 seconds, and from tape 2 minutes 7 seconds. Of course, these times are just to read the data. Processing and rewriting the data would increase the time.

12. Logically, it does not matter if the limit register uses a virtual address or a

physical address. However, the performance of the former is better. If virtual addresses are used, the addition of the virtual address and the base register can start simultaneously with the comparison and then can run in parallel. If physical addresses are used, the comparison cannot start until the addition is complete, increasing the access time.

13. Maybe. If the caller gets control back and immediately overwrites the data,

when the write finally occurs, the wrong data will be written. However, if the driver first copies the data to a private buffer before returning, then the caller can be allowed to continue immediately. Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused, but this is tricky and error prone.

14. A trap is caused by the program and is synchronous with it. If the program is run again and again, the trap will always occur at exactly the same position in

the instruction stream. An interrupt is caused by an external event and its timing is not reproducible.

15. Base = 40,000 and limit = 10,000. An answer of limit = 50,000 is incorrect for

the way the system was described in this book. It could have been implemented that way, but doing so would have required waiting until the address + base calculation was completed before starting the limit check, thus slowing down the computer.

16. The process table is needed to store the state of a process that is currently

suspended, either ready or blocked. It is not needed in a single process system because the single process is never suspended.

17. Mounting a file system makes any files already in the mount point directory

inaccessible, so mount points are normally empty. However, a system administrator might want to copy some of the most important files normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being checked or repaired.

18. Fork can fail if there are no free slots left in the process table (and possibly if

there is no memory or swap space left). Exec can fail if the file name given does not exist or is not a valid executable file. Unlink can fail if the file to be unlinked does not exist or the calling process does not have the authority to unlink it.

19. If the call fails, for example because fd is incorrect, it can return −1. It can

also fail because the disk is full and it is not possible to write the number of bytes requested. On a correct termination, it always returns nbytes.

20. It contains the bytes: 1, 5, 9, 2.
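For reference, a minimal C sketch of the two calls from the problem statement (it assumes a file named data containing the byte sequence 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5; after seeking to offset 3, the four bytes read are 1, 5, 9, 2):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    unsigned char buffer[4];
    int fd = open("data", O_RDONLY);     /* file assumed to hold 3,1,4,1,5,9,2,6,5,3,5 */
    if (fd < 0)
        return 1;

    lseek(fd, 3, SEEK_SET);              /* move to byte offset 3           */
    read(fd, buffer, sizeof buffer);     /* reads the four bytes 1, 5, 9, 2 */

    for (int i = 0; i < 4; i++)
        printf("%d ", buffer[i]);
    printf("\n");
    close(fd);
    return 0;
}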

21. Block special files consist of numbered blocks, each of which can be read or

written independently of all the other ones. It is possible to seek to any block and start reading or writing. This is not possible with character special files.

22. System calls do not really have names, other than in a documentation sense.

When the library procedure read traps to the kernel, it puts the number of the system call in a register or on the stack. This number is used to index into a table. There is really no name used anywhere. On the other hand, the name of the library procedure is very important, since that is what appears in the program.
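A highly simplified sketch of the dispatch idea described above (the table layout and the number 3 for read are illustrative assumptions, not any particular kernel's layout): the trap handler sees only a number and uses it to index a table of handler functions.

#include <stdio.h>

/* Illustrative kernel-side dispatch: the read library stub leaves a number in a
 * register and traps; only that number reaches the kernel, never a name.       */
typedef long (*syscall_fn)(long, long, long);

static long sys_read(long fd, long buf, long count)  { return printf("read(%ld)\n",  fd); }
static long sys_write(long fd, long buf, long count) { return printf("write(%ld)\n", fd); }

static syscall_fn syscall_table[] = { 0, 0, 0, sys_read, sys_write };

static long dispatch(int number, long a1, long a2, long a3)
{
    return syscall_table[number](a1, a2, a3);   /* lookup by number, not by name */
}

int main(void)
{
    dispatch(3, 0, 0, 0);    /* what a trap with number 3 in the register would do */
    return 0;
}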

23. Yes it can, especially if the kernel is a message-passing system.

24. As far as program logic is concerned it does not matter whether a call to a

library procedure results in a system call. But if performance is an issue, if a task can be accomplished without a system call the program will run faster. Every system call involves overhead time in switching from the user context to the kernel context. Furthermore, on a multiuser system the operating system may schedule another process to run when a system call completes, further slowing the progress in real time of a calling process.

25. Several UNIX calls have no counterpart in the Win32 API:

Link: a Win32 program cannot refer to a file by an alternate name or see it in more than one directory. Also, attempting to create a link is a convenient way to test for and create a lock on a file.

Mount and umount: a Windows program cannot make assumptions about standard path names because on systems with multiple disk drives the drive name part of the path may be different.

Chmod: Windows programmers have to assume that every user can access every file.


Kill: Windows programmers cannot kill a misbehaving program that is not cooperating.

26. The conversions are straightforward:

(a) A micro year is 10^−6 × 365 × 24 × 3600 = 31.536 sec.

(b) 1000 meters or 1 km.

(c) There are 2^40 bytes, which is 1,099,511,627,776 bytes.

(d) It is 6 × 10^24 kg.

SOLUTIONS TO CHAPTER 2 PROBLEMS

1. The transition from blocked to running is conceivable. Suppose that a process

is blocked on I/O and the I/O finishes. If the CPU is otherwise idle, the process could go directly from blocked to running. The other missing transition, from ready to blocked, is impossible. A ready process cannot do I/O or anything else that might block it. Only a running process can block.

2. You could have a register containing a pointer to the current process table

entry. When I/O completed, the CPU would store the current machine state in the current process table entry. Then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process table entry (the service procedure). This process would then be started up.

3. Generally, high-level languages do not allow one the kind of access to CPU

hardware that is required. For instance, an interrupt handler may be required to enable and disable the interrupt servicing a particular device, or to manipulate data within a process’ stack area. Also, interrupt service routines must execute as rapidly as possible.

4. There are several reasons for using a separate stack for the kernel. Two of

them are as follows. First, you do not want the operating system to crash because a poorly written user program does not allow for enough stack space. Second, if the kernel leaves stack data in a user program’s memory space upon return from a system call, a malicious user might be able to use this data to find out information about other processes.

5. It would be difficult, if not impossible, to keep the file system consistent.

Suppose that a client process sends a request to server process 1 to update a file. This process updates the cache entry in its memory. Shortly thereafter,


another client process sends a request to server 2 to read that file. Unfortunately, if the file is also cached there, server 2, in its innocence, will return obsolete data. If the first process writes the file through to the disk after caching it, and server 2 checks the disk on every read to see if its cached copy is up-to-date, the system can be made to work, but it is precisely all these disk accesses that the caching system is trying to avoid.

6. When a thread is stopped, it has values in the registers. They must be saved,

just as when the process is stopped the registers must be saved. Timesharing threads is no different than timesharing processes, so each thread needs its own register save area.

7. No. If a single-threaded process is blocked on the keyboard, it cannot fork.

8. A worker thread will block when it has to read a Web page from the disk. If

user-level threads are being used, this action will block the entire process, destroying the value of multithreading. Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.

9. Threads in a process cooperate. They are not hostile to one another. If yielding

is needed for the good of the application, then a thread will yield. After all, it is usually the same programmer who writes the code for all of them.

10. User-level threads cannot be preempted by the clock unless the whole process’ quantum has been used up. Kernel-level threads can be preempted individually. In the latter case, if a thread runs too long, the clock will interrupt the current process and thus the current thread. The kernel is free to pick a different thread from the same process to run next if it so desires.

11. In the single-threaded case, the cache hits take 15 msec and cache misses take

90 msec. The weighted average is 2/3 × 15 + 1/3 × 90. Thus the mean request takes 40 msec and the server can do 25 per second. For a multithreaded server, all the waiting for the disk is overlapped, so every request takes 15 msec, and the server can handle 66 2/3 requests per second.
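The arithmetic can be checked directly (15 msec for a cache hit, 90 msec for a miss, and a 2/3 hit rate, as given in the problem):

#include <stdio.h>

int main(void)
{
    double hit = 15.0, miss = 90.0;                       /* msec, from the problem */
    double mean = (2.0 / 3.0) * hit + (1.0 / 3.0) * miss; /* 40 msec                */

    printf("single-threaded: %.0f msec per request, %.0f requests/sec\n",
           mean, 1000.0 / mean);
    /* With threads, the disk wait overlaps, so each request costs only 15 msec. */
    printf("multithreaded:   %.2f requests/sec\n", 1000.0 / hit);
    return 0;
}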

12. Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It just adds unnecessary complexity. As an example, consider a

telephone directory assistance number (like 555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, the entire database takes 64 megabytes, and can easily be kept in the server’s memory to provide fast lookup.

13. The pointers are really necessary because the size of the global variable is unknown. It could be anything from a character to an array of floating-point numbers. If the value were stored, one would have to give the size to create_global, which is all right, but what type should the second parameter of set_global be, and what type should the value of read_global be?

14.It could happen that the runtime system is precisely at the point of blocking or

unblocking a thread, and is busy manipulating the scheduling queues. This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching, since they might be in an inconsistent state. One solution is to set a flag when the runtime system is entered. The clock handler would see this and set its own flag,


then return. When the runtime system finished, it would check the clock flag, see that a clock interrupt occurred, and now run the clock handler.

15. Yes it is possible, but inefficient. A thread wanting to do a system call first sets

an alarm timer, then does the call. If the call blocks, the timer returns control to the threads package. Of course, most of the time the call will not block, and the timer has to be cleared. Thus each system call that might block has to be executed as three system calls. If timers go off prematurely, all kinds of problems can develop. This is not an attractive way to build a threads package.

16. The priority inversion problem occurs when a low-priority process is in its

critical region and suddenly a high-priority process becomes ready and is scheduled. If it uses busy waiting, it will run forever. With user-level threads, it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread to run. There is no preemption. With kernel-level threads this problem can arise.

17. Each thread calls procedures on its own, so it must have its own stack for the

local variables, return addresses, and so on. This is equally true for user-level threads as for kernel-level threads.

18. A race condition is a situation in which two (or more) processes are about to

perform some action. Depending on the exact timing, one or the other goes first. If one of the processes goes first, everything works, but if another one goes first, a fatal error occurs.

19. Yes. The simulated computer could be multiprogrammed. For example, while

process A is running, it reads out some shared variable. Then a simulated clock tick happens and process B runs. It also reads out the same variable. Then it adds 1 to the variable. When process A runs, if it also adds one to the variable, we have a race condition.

20. Yes, it still works, but it still is busy waiting, of course.

21. It certainly works with preemptive scheduling. In fact, it was designed for that

case. When scheduling is nonpreemptive, it might fail. Consider the case in which turn is initially 0 but process 1 runs first. It will just loop forever and never release the CPU.

22. Yes it can. The memory word is used as a flag, with 0 meaning that no one is

using the critical variables and 1 meaning that someone is using them. Put a 1 in the register, and swap the memory word and the register. If the register


contains a 0 after the swap, access has been granted. If it contains a 1, access has been denied. When a process is done, it stores a 0 in the flag in memory.
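A user-level sketch of the same idea using C11 atomics, with atomic_exchange standing in for the hardware swap instruction:

#include <stdatomic.h>

static atomic_int lock_word = 0;     /* 0 = free, 1 = someone is in the critical region */

void enter_region(void)
{
    /* Put a 1 in and swap it with the memory word; if the old value was 1,
     * someone else already holds the lock, so keep trying (busy waiting). */
    while (atomic_exchange(&lock_word, 1) == 1)
        ;
}

void leave_region(void)
{
    atomic_store(&lock_word, 0);     /* store a 0 back in the flag when done */
}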

23. To do a semaphore operation, the operating system first disables interrupts.

Then it reads the value of the semaphore. If it is doing a down and the semaphore is equal to zero, it puts the calling process on a list of blocked processes associated with the semaphore. If it is doing an up , it must check to see if any processes are blocked on the semaphore. If one or more processes are blocked, one of then is removed from the list of blocked processes and made runnable. When all these operations have been completed, interrupts can be enabled again.
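As a sketch of that description (the interrupt and blocking primitives are kernel-internal; the empty stand-ins here exist only so the sketch compiles and are not a real API):

struct semaphore {
    int count;
    int blocked;                              /* number of processes waiting here  */
};

static void disable_interrupts(void) { }      /* e.g. CLI on x86                   */
static void enable_interrupts(void)  { }      /* e.g. STI on x86                   */
static void block_caller(struct semaphore *s) { s->blocked++; /* plus enqueue */   }
static void wake_one(struct semaphore *s)     { s->blocked--; /* plus dequeue */   }

void down(struct semaphore *s)
{
    disable_interrupts();
    if (s->count == 0)
        block_caller(s);                      /* caller joins the blocked list     */
    else
        s->count--;
    enable_interrupts();
}

void up(struct semaphore *s)
{
    disable_interrupts();
    if (s->blocked > 0)
        wake_one(s);                          /* make one blocked process runnable */
    else
        s->count++;
    enable_interrupts();
}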

24. Associated with each counting semaphore are two binary semaphores, M, used

for mutual exclusion, and B, used for blocking. Also associated with each counting semaphore is a counter that holds the number of ups minus the number of downs, and a list of processes blocked on that semaphore. To implement down, a process first gains exclusive access to the semaphore’s counter and list by doing a down on M. It then decrements the counter. If it is zero or more, it just does an up on M and exits. If it is negative, the process is put on the list of blocked processes. Then an up is done on M and a down is done on B to block the process. To implement up, first M is downed to get mutual exclusion, and then the counter is incremented. If it is more than zero, no one was blocked, so all that needs to be done is to up M. If, however, the counter is now negative or zero, some process must be removed from the list. Finally, an up is done on B and M in that order.

25. If the program operates in phases and neither process may enter the next phase

until both are finished with the current phase, it makes perfect sense to use a barrier.

26. With round-robin scheduling it works. Sooner or later L will run, and

eventually it will leave its critical region. The point is, with priority scheduling, L never gets to run at all; with round robin, it gets a normal time slice periodically, so it has the chance to leave its critical region.

27. With kernel threads, a thread can block on a semaphore and the kernel can run

some other thread in the same process. Consequently, there is no problem using semaphores. With user-level threads, when one thread blocks on a semaphore, the kernel thinks the entire process is blocked and does not run it ever again. Consequently, the process fails.

28. It is very expensive to implement. Each time any variable that appears in a

predicate on which some process is waiting changes, the runtime system must

re-evaluate the predicate to see if the process can be unblocked. With the Hoare and Brinch Hansen monitors, processes can only be awakened on a signal primitive.

29. The employees communicate by passing messages: orders, food, and bags in

this case. In UNIX terms, the four processes are connected by pipes.

30. It does not lead to race conditions (nothing is ever lost), but it is effectively

busy waiting.

31. If a philosopher blocks, neighbors can later see that he is hungry by checking

his state, in test, so he can be awakened when the forks are available.

32. The change would mean that after a philosopher stopped eating, neither of his

neighbors could be chosen next. In fact, they would never be chosen. Suppose that philosopher 2 finished eating. He would run test for philosophers 1 and 3, and neither would be started, even though both were hungry and both forks were available. Similarly, if philosopher 4 finished eating, philosopher 3 would not be started. Nothing would start him.

33. Variation 1: readers have priority. No writer may start when a reader is active. When a new reader appears, it may start immediately unless a writer

is currently active. When a writer finishes, if readers are waiting, they are all started, regardless of the presence of waiting writers. Variation 2: Writers have priority. No reader may start when a writer is waiting. When the last active process finishes, a writer is started, if there is one; otherwise, all the readers (if any) are started. Variation 3: symmetric version. When a reader is active, new readers may start immediately. When a writer finishes, a new writer has priority, if one is waiting. In other words, once we have started reading, we keep reading until there are no readers left. Similarly, once we have started writing, all pending writers are allowed to run.

34. It will need nT sec.

35. If a process occurs multiple times in the list, it will get multiple quanta per

cycle. This approach could be used to give more important processes a larger share of the CPU. But when the process blocks, all entries had better be removed from the list of runnable processes.

36. In simple cases it may be possible to determine whether I/O will be limiting by

looking at source code. For instance a program that reads all its input files into buffers at the start will probably not be I/O bound, but a problem that reads and writes incrementally to a number of different files (such as a compiler) is likely to be I/O bound. If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU time used by a


program, you can compare this with the total time to complete execution of the program. This is, of course, most meaningful on a system where you are the only user.

37. For multiple processes in a pipeline, the common parent could pass to the

operating system information about the flow of data. With this information the OS could, for instance, determine which process could supply output to a process blocking on a call for input.

38. The CPU efficiency is the useful CPU time divided by the total CPU time.

When Q ≥ T, the basic cycle is for the process to run for T and undergo a process switch for S. Thus (a) and (b) have an efficiency of T/(S + T). When the quantum is shorter than T, each run of T will require T/Q process switches, wasting a time ST/Q. The efficiency here is then

T / (T + ST/Q)

which reduces to Q/(Q + S), which is the answer to (c). For (d), we just substitute Q for S and find that the efficiency is 50 percent. Finally, for (e), as Q → 0 the efficiency goes to 0.

39. Shortest job first is the way to minimize average response time.

0 < X ≤ 3: X, 3, 5, 6, 9.
3 < X ≤ 5: 3, X, 5, 6, 9.
5 < X ≤ 6: 3, 5, X, 6, 9.
6 < X ≤ 9: 3, 5, 6, X, 9.
X > 9: 3, 5, 6, 9, X.

40. For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At

the end of 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. For priority scheduling, B is run first. After 6 minutes it is finished. The other jobs finish at 14, 24, 26, and 30, for an average of 18.8 minutes. If the jobs run in the order A through E , they finish at 10, 16, 18, 22, and 30, for an average of 19.2 minutes. Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of 14 minutes.
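The shortest-job-first figure can be reproduced mechanically from the run times given in the problem (10, 6, 2, 4, and 8 minutes):

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

int main(void)
{
    int t[] = { 10, 6, 2, 4, 8 };        /* run times in minutes, from the problem */
    int n = 5, finish = 0, total = 0;

    qsort(t, n, sizeof t[0], cmp);       /* shortest job first                     */
    for (int i = 0; i < n; i++) {
        finish += t[i];                  /* completion time of the i-th job        */
        total += finish;                 /* sum of turnaround times                */
    }
    printf("average turnaround = %.1f minutes\n", (double)total / n);   /* 14.0 */
    return 0;
}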


41. The first time it gets 1 quantum. On succeeding runs it gets 2, 4, 8, and 15, so

it must be swapped in 5 times.

42. A check could be made to see if the program was expecting input and did

anything with it. A program that was not expecting input and did not process it would not get any special priority boost.

43. The sequence of predictions is 40, 30, 35, and now 25.
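This is aging with a weight of 1/2. As a check, and assuming the measured run times in the problem are 40, 20, 40, and 15 msec (the sequence that reproduces 40, 30, 35, 25):

#include <stdio.h>

int main(void)
{
    double measured[] = { 40, 20, 40, 15 };   /* assumed from the problem statement  */
    double estimate = measured[0];            /* first prediction is the first burst */

    printf("predictions: %.0f", estimate);
    for (int i = 1; i < 4; i++) {
        estimate = (estimate + measured[i]) / 2;   /* aging with weight 1/2 */
        printf(", %.0f", estimate);
    }
    printf("\n");                             /* prints 40, 30, 35, 25 */
    return 0;
}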

44. The fraction of the CPU used is 35/50 + 20/100 + 10/200 + x /250. To be

schedulable, this must be less than 1. Thus x must be less than 12.5 msec.
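The bound follows from requiring the total CPU utilization to stay at or below 1; a quick check:

#include <stdio.h>

int main(void)
{
    /* CPU time / period for the first three events, from the problem (msec). */
    double used = 35.0 / 50 + 20.0 / 100 + 10.0 / 200;     /* 0.95                    */
    double x_max = (1.0 - used) * 250;                     /* x/250 gets what is left */

    printf("utilization so far = %.2f, so x must be less than %.1f msec\n", used, x_max);
    return 0;
}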

45. Two-level scheduling is needed when memory is too small to hold all the

ready processes. Some set of them is put into memory, and a choice is made from that set. From time to time, the set of in-core processes is adjusted. This algorithm is easy to implement and reasonably efficient, certainly a lot better than say, round robin without regard to whether a process was in memory or not.

46. The kernel could schedule processes by any means it wishes, but within each

process it runs threads strictly in priority order. By letting the user process set the priority of its own threads, the user controls the policy but the kernel handles the mechanism.

47. A possible shell script might be

if [ ! -f numbers ]; then echo 0 > numbers; fi
count=0
while (test $count != 200 )
do
        count=`expr $count + 1`
        n=`tail -1 numbers`
        expr $n + 1 >> numbers
done

Run the script twice simultaneously, by starting it once in the background (using &) and again in the foreground. Then examine the file numbers. It will probably start out looking like an orderly list of numbers, but at some point it will lose its orderliness, due to the race condition created by running two copies of the script. The race can be avoided by having each copy of the script test for and set a lock on the file before entering the critical area, and unlocking it upon leaving the critical area. This can be done like this:

if ln numbers numbers.lock
then
        n=`tail -1 numbers`
        expr $n + 1 >> numbers
        rm numbers.lock
fi

This version will just skip a turn when the file is inaccessible; variant solutions could put the process to sleep, do busy waiting, or count only loops in which the operation is successful.

SOLUTIONS TO CHAPTER 3 PROBLEMS

1.In the U.S., consider a presidential election in which three or more candidates

are trying for the nomination of some party. After all the primary elections


are finished, when the delegates arrive at the party convention, it could happen that no candidate has a majority and that no delegate is willing to change his or her vote. This is a deadlock. Each candidate has some resources (votes) but needs more to get the job done. In countries with multiple political parties in the parliament, it could happen that each party supports a different version of the annual budget and that it is impossible to assemble a majority to pass the budget. This is also a deadlock.

2. If the printer starts to print a file before the entire file has been received (this is

often allowed to speed response), the disk may fill with other requests that can’t be printed until the first file is done, but which use up disk space needed to receive the file currently being printed. If the spooler does not start to print a file until the entire file has been spooled it can reject a request that is too big. Starting to print a file is equivalent to reserving the printer; if the reservation is deferred until it is known that the entire file can be received, a deadlock of the entire system can be avoided. The user with the file that won’t fit is still deadlocked of course, and must go to another facility that permits printing bigger files.

3. The printer is nonpreemptable; the system cannot start printing another job

until the previous one is complete. The spool disk is preemptable; you can delete an incomplete file that is growing too large and have the user send it later, assuming the protocol allows that.

4. Yes. It does not make any difference whatsoever.

5. Yes, illegal graphs exist. We stated that a resource may only be held by a

single process. An arc from a resource square to a process circle indicates that the process owns the resource. Thus a square with arcs going from it to two or more processes means that all those processes hold the resource, which violates the rules. Consequently, any graph in which multiple arcs leave a square and end in different circles violates the rules. Arcs from squares to squares or from circles to circles also violate the rules.

6. A portion of all such resources could be reserved for use only by processes

owned by the administrator, so he or she could always run a shell and programs needed to evaluate a deadlock and make decisions about which processes to kill to make the system usable again.

7. Neither change leads to deadlock. There is no circular wait in either case.

8. Voluntary relinquishment of a resource is most similar to recovery through

preemption. The essential difference is that computer processes are not


expected to solve such problems on their own. Preemption is analogous to the operator or the operating system acting as a policeman, overriding the normal rules individual processes obey.

9. The process is asking for more resources than the system has. There is no

conceivable way it can get these resources, so it can never finish, even if no other processes want any resources at all.

10. If the system had two or more CPUs, two or more processes could run in

parallel, leading to diagonal trajectories.

11. Yes. Do the whole thing in three dimensions. The z-axis measures the number

of instructions executed by the third process.

12. The method can only be used to guide the scheduling if the exact instant at

which a resource is going to be claimed is known in advance. In practice, this is rarely the case.

13. A request from D is unsafe, but one from C is safe.

14. There are states that are neither safe nor deadlocked, but which lead to

deadlocked states. As an example, suppose we have four resources: tapes, plotters, scanners, and CD-ROMs, as in the text, and three processes competing for them. We could have the following situation:

        Has         Needs       Available
A:   2 0 0 0     1 0 2 0     0 1 2 1
B:   1 0 0 0     0 1 3 1
C:   0 1 2 1     1 0 1 0

This state is not deadlocked because many actions can still occur, for example,

A can still get two scanners. However, if each process asks for its remaining requirements, we have a deadlock.

15. The system is deadlock free. Suppose that each process has one resource.

There is one resource free. Either process can ask for it and get it, in which case it can finish and release both resources. Consequently deadlock is impossible.

16. If a process has m resources it can finish and cannot be involved in a deadlock.

Therefore, the worst case is where every process has m − 1 resources and needs another one. If there is one resource left over, one process can finish and release all its resources, letting the rest finish too. Therefore the condition for avoiding deadlock is r ≥ p(m − 1) + 1.


17. No. D can still finish. When it finishes, it returns enough resources to allow E

(or A ) to finish, and so on.

18. With three processes, each one can have two drives. With four processes, the

distribution of drives will be (2, 2, 1, 1), allowing the first two processes to finish. With five processes, the distribution will be (2, 1, 1, 1, 1), which still allows the first one to finish. With six, each holding one tape drive and wanting another, we have a deadlock. Thus for n < 6 the system is deadlock-free.

19. Comparing a row in the matrix to the vector of available resources takes m

operations. This step must be repeated on the order of n times to find a process that can finish and be marked as done. Thus marking a process as done takes on the order of mn steps. Repeating the algorithm for all n processes means that the number of steps is then mn².
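A sketch of the algorithm whose cost is being counted here, using the state from problem 14 above (the array sizes and layout are illustrative): the nested loops make the mn² bound visible, since each pass scans up to n rows of m entries and up to n passes are needed.

#include <stdbool.h>
#include <stdio.h>

#define N 3                 /* processes        */
#define M 4                 /* resource classes */

/* Returns true if, granting each process its remaining needs in some order,
 * every process can finish (the state is safe). */
static bool all_can_finish(int alloc[N][M], int need[N][M], int avail[M])
{
    int work[M];
    bool done[N] = { false };

    for (int j = 0; j < M; j++)
        work[j] = avail[j];

    for (int pass = 0; pass < N; pass++) {       /* at most n passes                  */
        for (int i = 0; i < N; i++) {            /* compare each row...               */
            if (done[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)          /* ...against the m available counts */
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                            /* process i can run to completion   */
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j];      /* it releases what it holds         */
                done[i] = true;
            }
        }
    }
    for (int i = 0; i < N; i++)
        if (!done[i]) return false;
    return true;
}

int main(void)
{
    /* The state of problem 14: rows A, B, C; columns tapes, plotters, scanners, CD-ROMs. */
    int alloc[N][M] = { {2,0,0,0}, {1,0,0,0}, {0,1,2,1} };
    int need [N][M] = { {1,0,2,0}, {0,1,3,1}, {1,0,1,0} };
    int avail[M]    = { 0, 1, 2, 1 };

    printf("%s\n", all_can_finish(alloc, need, avail) ? "safe" : "unsafe");   /* unsafe */
    return 0;
}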

20. The needs matrix is as follows:

0 1 0 0 2

0 2 1 0 0

1 0 3 0 0

0 0 1 1 1

If x is 0, we have a deadlock immediately. If x is 1, process D can run to completion. When it is finished, the available vector is 1 1 2 2 1. Unfortunately we are now deadlocked. If x is 2, after D runs, the available vector is 1 1 3 2 1 and C can run. After it finishes and returns its resources the available vector is 2 2 3 3 1, which will allow B to run and complete, and then

A to run and complete. Therefore, the smallest value of x that avoids a deadlock is 2.

21. Yes. Suppose that all the mailboxes are empty. Now A sends to B and waits for

a reply, B sends to C and waits for a reply, and C sends to A and waits for a reply. All the conditions for deadlock are now fulfilled.

22. Suppose that process A requests the records in the order a, b, c. If process B

also asks for a first, one of them will get it and the other will block. This situation is always deadlock free since the winner can now run to completion without interference. Of the four other combinations, some may lead to deadlock and some are deadlock free. The six cases are as follows:

a b c    deadlock free
a c b    deadlock free
b a c    possible deadlock
b c a    possible deadlock
c a b    possible deadlock
c b a    possible deadlock

Since four of the six may lead to deadlock, there is a 1/3 chance of avoiding a deadlock and a 2/3 chance of getting one.

23. Two-phase locking eliminates deadlocks, but introduces potential starvation. A

process has to keep trying and failing to acquire all of its records. There is no upper bound on how long it may take.

24. To avoid circular wait, number the resources (the accounts) with their account

numbers. After reading an input line, a process locks the lower-numbered account first, then when it gets the lock (which may entail waiting), it locks the other one. Since no process ever waits for an account lower than what it already has, there is never a circular wait, hence never a deadlock.
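A sketch of this rule using POSIX mutexes (the account array and the transfer routine are illustrative; it assumes the two accounts are distinct):

#include <pthread.h>
#include <stdio.h>

#define NACCOUNTS 100

static pthread_mutex_t lock[NACCOUNTS];
static long balance[NACCOUNTS];

/* Always lock the lower-numbered account first, so no circular wait can form. */
static void transfer(int from, int to, long amount)     /* assumes from != to */
{
    int first  = from < to ? from : to;
    int second = from < to ? to : from;

    pthread_mutex_lock(&lock[first]);
    pthread_mutex_lock(&lock[second]);
    balance[from] -= amount;
    balance[to]   += amount;
    pthread_mutex_unlock(&lock[second]);
    pthread_mutex_unlock(&lock[first]);
}

int main(void)
{
    for (int i = 0; i < NACCOUNTS; i++)
        pthread_mutex_init(&lock[i], NULL);
    transfer(7, 3, 100);                 /* locks account 3 first, then account 7 */
    printf("balance[3]=%ld balance[7]=%ld\n", balance[3], balance[7]);
    return 0;
}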

25. Change the semantics of requesting a new resource as follows. If a process

asks for a new resource and it is available, it gets the resource and keeps what it already has. If the new resource is not available, all existing resources are released. With this scenario, deadlock is impossible and there is no danger that the new resource is acquired but existing ones lost. Of course, the process only works if releasing a resource is possible (you can release a scanner between pages or a CD recorder between CDs).

26. I’d give it an F (failing) grade. What does the process do? Since it clearly

needs the resource, it just asks again and blocks again. This is no better than staying blocked. In fact, it may be worse since the system may keep track of how long competing processes have been waiting and assign a newly freed resource to the process that has been waiting longest. By periodically timing out and trying again, a process loses its seniority.

27. If both programs ask for Woofer first, the computers will starve with the

endless sequence: request Woofer, cancel request, request Woofer, cancel request, etc. If one of them asks for the doghouse and the other asks for the dog, we have a deadlock, which is detected by both parties and then broken, but it is just repeated on the next cycle. Either way, if both computers have been programmed to go after the dog or the doghouse first, either starvation or deadlock ensues. There is not really much difference between the two here. In most deadlock problems, starvation does not seem serious because introducing random delays will usually make it very unlikely. That approach does not work here.


SOLUTIONS TO CHAPTER 4 PROBLEMS

1. The chance that all four processes are idle is 1/16, so the CPU idle time is

1/16.

2. If each job has 50% I/O wait, then it will take 20 minutes to complete in the

absence of competition. If run sequentially, the second one will finish 40 minutes after the first one starts. With two jobs, the approximate CPU utilization is 1 − 0.5². Thus each one gets 0.375 CPU minute per minute of real time. To accumulate 10 minutes of CPU time, a job must run for 10/0.375 minutes or about 26.67 minutes. Thus running sequentially the jobs finish after

40 minutes, but running in parallel they finish after 26.67 minutes.
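The utilization figure is the usual 1 − p^n formula with p = 0.5 and n = 2 jobs; a quick check of the numbers:

#include <stdio.h>

int main(void)
{
    double p = 0.5;                   /* fraction of time a job waits for I/O  */
    double util = 1 - p * p;          /* two jobs in memory: 1 - p^2 = 0.75    */
    double per_job = util / 2;        /* CPU minutes each job gets per minute  */

    printf("utilization = %.2f, per job = %.3f\n", util, per_job);
    printf("minutes to accumulate 10 CPU minutes = %.2f\n", 10 / per_job);   /* 26.67 */
    return 0;
}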

3. Almost the entire memory has to be copied, which requires each word to be

read and then rewritten at a different location. Reading 4 bytes takes 10 nsec,


so reading 1 byte takes 2.5 nsec and writing it takes another 2.5 nsec, for a total of 5 nsec per byte compacted. This is a rate of 200,000,000 bytes/sec. To copy 128 MB (2^27 bytes, which is about 1.34 × 10^8 bytes), the computer needs 2^27/200,000,000 sec, which is about 671 msec. This number is slightly pessimistic because if the initial hole at the bottom of memory is k bytes, those k bytes do not need to be copied. However, if there are many holes and many data segments, the holes will be small so k will be small and the error in the calculation will also be small.

4. The bitmap needs 1 bit per allocation unit. With 2^27/n allocation units, this is 2^24/n bytes. The linked list has 2^27/2^16 or 2^11 nodes, each of 8 bytes, for a total of 2^14 bytes. For small n, the linked list is better. For large n, the bitmap is better. The crossover point can be calculated by equating these two formulas and solving for n. The result is 1 KB. For n smaller than 1 KB, a linked list is better. For n larger than 1 KB, a bitmap is better. Of course, the assumption of segments and holes alternating every 64 KB is very unrealistic. Also, we need n ≤ 64 KB if the segments and holes are 64 KB.

5. First fit takes 20 KB, 10 KB, 18 KB. Best fit takes 12 KB, 10 KB, and 9 KB.

Worst fit takes 20 KB, 18 KB, and 15 KB. Next fit takes 20 KB, 18 KB, and 9 KB.

6. Real memory uses physical addresses. These are the numbers that the memory

chips react to on the bus. Virtual addresses are the logical addresses that refer to a process’ address space. Thus a machine with a 16-bit word can generate virtual addresses up to 64K, regardless of whether the machine has more or less memory than 64 KB.

7. For a 4-KB page size the (page, offset) pairs are (4, 3616), (8, 0), and (14,

2656). For an 8-KB page size they are (2, 3616), (4, 0), (7, 2656).
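These pairs can be checked with page = address / page size and offset = address mod page size (addresses 20000, 32768, and 60000 from the problem):

#include <stdio.h>

int main(void)
{
    unsigned addr[]  = { 20000, 32768, 60000 };   /* from the problem statement */
    unsigned sizes[] = { 4096, 8192 };

    for (int s = 0; s < 2; s++) {
        printf("%u-KB pages:\n", sizes[s] / 1024);
        for (int i = 0; i < 3; i++)
            printf("  %u -> (page %u, offset %u)\n",
                   addr[i], addr[i] / sizes[s], addr[i] % sizes[s]);
    }
    return 0;
}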

8. (a) 8212 (b) 4100 (c) 24684

9. They built an MMU and inserted it between the 8086 and the bus. Thus all

8086 physical addresses went into the MMU as virtual addresses. The MMU then mapped them onto physical addresses, which went to the bus.

10. The total virtual address space for all the processes combined is nv so this

much storage is needed for pages. However an amount r can be in RAM, so the amount of disk storage required is only nv − r. This amount is far more than is ever needed in practice because rarely will there be n processes

