
NOS Paper Answers (2000/2001)

NAPIER UNIVERSITY

SCHOOL OF COMPUTING

Level III SESSION 2000/2001

Duration: 2 hours


Network Operating Systems


MODULE NO: CO32010

There are SIX questions in this paper


Attempt THREE questions.


There are x pages in this paper.


Examiner: W.Buchanan



1. (a) Contrast the different methods that an operating system scheduler can undertake, and their main characteristics. Which is likely to give the smoothest scheduling, and which is likely to be the most efficient on processor time? Give reason(s) for your choice. Also, which scheduling method do Windows NT and UNIX use?
(15)
(b) Outline the usage of signals in UNIX and contrast it with the usage of messages in Microsoft Windows. What is the major problem with signals, and how might it be overcome?
(10)

Total Marks [25]
Answer

Sample answer
(a)


1. First-Come, First-Served (FCFS). This type of scheduling is used with non-pre-emptive operating systems [1], where the time that a process waits in the queue depends entirely on the processes which are in front of it [1]. The response of the system is thus not predictable. [1]
2. Shortest-Job-First (SJF). This is one of the most efficient algorithms [1], and involves estimating the amount of time that a process will run for [1], and taking the process which will take the shortest time to complete [1].
3. Priority Scheduling. This is the method typically used in general-purpose operating systems (Microsoft Windows and UNIX). It can be used with either pre-emptive or non-pre-emptive operating systems [1]. The main problem is to assign a priority to each process, where priorities can either be internal or external [1]. Internal priorities relate to measurable system resources, such as time limits, memory requirements, file input/output, and so on. A problem is that some processes might never reach the required priority and may never get time on the processor. To overcome this, low-priority waiting processes have their priority increased over time (known as ageing) [1].
4. Round-Robin (RR). This is first-come, first-served with pre-emption, where each process is given a finite time slice on the processor [1]. The queue is circular [1], thus when a process has been run, the queue pointer is moved to the next process. This is a relatively smooth schedule and gives all processes a share of the processor [1] (see the sketch after this list).
5. Multilevel Queue Scheduling. This scheme supports several different queues [1], and sets priorities for them [1]. For example, a system could run two different queues: foreground (interactive) and background (batch). The foreground task could be given a much higher priority over the background task, such as 80%-20%. Each of the queues can be assigned different priorities [1]. Windows NT runs a pre-emptive scheme where certain system processes are given a higher priority than other non-system processes.
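The following is a minimal round-robin sketch in Python; the process names, burst times and two-unit quantum are invented for illustration only, and simply show how the circular ready queue gives every process a share of the processor.

from collections import deque

def round_robin(bursts, quantum=2):
    """bursts: dict of process name -> CPU time still required."""
    queue = deque(bursts.items())          # circular ready queue
    timeline = []                          # order in which time slices are run
    while queue:
        name, remaining = queue.popleft()
        used = min(quantum, remaining)
        timeline.append((name, used))
        remaining -= used
        if remaining > 0:                  # not finished: back of the queue
            queue.append((name, remaining))
    return timeline

if __name__ == "__main__":
    # Three processes needing 5, 3 and 1 time units respectively.
    print(round_robin({"P1": 5, "P2": 3, "P3": 1}))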

(b)

UNIX uses signals in a similar way to interrupts, but they are implemented at a higher level. Events, rather than hardware devices, generate these signals [1]. Typically, software interrupt service routines (signal handlers) are called on certain signals. The signal handler sets the status of a process [1], but a process may also put itself in a sleep mode, waiting for a signal.

Messages are one of the most flexible methods of interprocess communication [1]. They allow information about the actual process to be passed between processes. These messages can be of a fixed length, but are more generally of variable length, and are typically unstructured [1]. This is the method that Microsoft Windows uses to pass data between processes.

In a message system, each process communicates with a port (or message port) [1], which is a data structure to which messages are sent [1]. Most systems have a single port per process, but others can have several message ports. In most cases, the system implements two system calls: SEND and RECEIVE [1].

One problem with this is that signals may get lost if they are sent before the process goes into a waiting mode [1]. One solution is to set a flag in the process whenever it receives a signal [1]. The process can then test this flag before it goes into the wait state; if the flag is set, the wait operation does not block the process [1]. A small sketch of this flag technique follows.
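A minimal sketch of the flag technique, assuming a POSIX system and Python's standard signal module (the use of SIGUSR1 and the ten-iteration loop are purely illustrative):

import os
import signal

got_signal = False

def handler(signum, frame):
    global got_signal
    got_signal = True              # record the event rather than acting in the handler

signal.signal(signal.SIGUSR1, handler)
print("PID", os.getpid(), "- signal me with: kill -USR1", os.getpid())

for _ in range(10):
    if not got_signal:             # test the flag before going into the wait state
        signal.pause()             # block until some signal arrives
    print("SIGUSR1 handled")
    got_signal = False
# Note: a narrow race still exists between the test and pause(); production C code
# would block the signal with sigprocmask() and wait with sigsuspend() instead.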



2. (a) Outline the main methods that are used in interprocess communications (IPC), and identify any major problems which occur with the methods.
(12)
(b) Explain the two main methods that cause deadlock. Also, define the four main conditions that must occur for deadlock to happen. Finally, briefly outline how the Banker's Algorithm can be used to avoid deadlock.
(13)
Total Marks [25]
Answer

Sample answer
(a)

· Named pipe. A named pipe is a pipe which is referred to by a specific name. Unlike unnamed pipes, a named pipe can be used by processes that do not share a common process origin. Typically a named pipe is known as a FIFO (first in, first out), as the data written to the pipe first is read first. [2]
· Pipes. Pipes allow data to flow from one process to another. Data only flows in one direction, typically from the output of one process to the input of another. The data from the output is buffered until the input process receives it. In UNIX the single vertical bar character (|) is used to represent a pipe, and operates in a similar way to a pipe system call in a program. Two-way communication can be constructed with two pipes, one for each direction. Unnamed pipes must have a common process origin (see the sketch after this list). [2]
· Message queuing. Message queues allow processes to pass messages between themselves, using either a single message queue or several message queues. The system kernel manages each message queue; messages placed on the queue carry an identity (message type). Messages can vary in length and be assigned different types or usages. A message queue can be created by one process and used by multiple processes that read and/or write messages to the queue. Application programs (or their processes) create message queues and send and receive messages using an application program interface (API). [2]
· Semaphores. Semaphores are used to synchronize events between processes. They are integer values which are greater than or equal to zero. A zero value puts a process to sleep, and a non-zero value causes a sleeping process to awaken (when certain operations are performed). A signal is sent to a semaphore when an event occurs, which then increments the semaphore. [2]
· Shared memory. Shared memory allows processes to interchange data through a defined area of memory. For example, one process could write to an area of memory and another could read from it. To do this, the writing process must check that the reading process is not reading from the memory at the time, and vice versa; if it is, the other process must wait for it to complete. This is typically implemented using semaphores, so that only one process is allowed to access the memory at a time. [2]
· Sockets. These are typically used to communicate over a network, between a client and a server (although peer-to-peer connections are also possible). Sockets are end points of a connection, and allow for a standard connection which is computer and operating system independent. [2]
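A minimal sketch of an unnamed pipe, assuming a UNIX system (os.fork); the message text is invented. The child writes into the pipe and the parent reads from it, and the two processes share a common origin, as unnamed pipes require.

import os

read_fd, write_fd = os.pipe()       # create the pipe before forking

pid = os.fork()
if pid == 0:                        # child: the writing ("output") end
    os.close(read_fd)
    os.write(write_fd, b"data flowing one way\n")
    os.close(write_fd)
    os._exit(0)
else:                               # parent: the reading ("input") end
    os.close(write_fd)
    print(os.read(read_fd, 1024).decode(), end="")
    os.close(read_fd)
    os.waitpid(pid, 0)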

(b)

1. Resource locking. This is where a process is waiting for a resource which will never become available. Some resources are pre-emptive, where processes can release their access to them and give other processes a chance to access them, but others are non-pre-emptive, where a process must be given full rights to the resource and no other process can access it until the currently assigned process has finished with it. An example of this is the transmission and reception of data on a communication system: it would not be a good idea for a process which has sent data and is awaiting data in return to yield to another process which also wanted to send and receive data. The non-pre-emptive resource is thus locked so that no other processes can access it. This causes a problem when the process which holds the lock never receives the event which will release the lock, or if the process crashes. [3]
2. Starvation. This is where other processes are run, and the deadlocked process is not given enough time to catch the required event. This can occur when a process has a low priority compared with other processes, as higher-priority tasks tend to hog the required resources. [3]

The four conditions that must occur for deadlock to occur are:

· Mutual exclusion condition. This is where processes get exclusive control of required resources, and will not yield the resource to any other process. [1]
· Wait for condition. This is where processes keep exclusive control of acquired resources while waiting for additional resources. [1]
· No pre-emption condition. This is where resources cannot be removed from the processes which have gained them, until they have completed their access on them. [1]
· Circular wait condition. This is a circular chain of processes on which each process holds one or more resources that are requested by the next process in the chain. [1]

The Banker's Algorithm works as follows. The operating system has a number of resources of a given type (N), which are allocated among a number of processes (M). Each process tells the operating system the maximum number of resources that it requires (n), which must be less than N. The operating system grants a process's resource requests one at a time, and only when doing so leaves the system in a safe state; processes can thus be guaranteed access to their resources within a finite time. A state is seen as safe if at least one of the processes can run to completion with the resources that are left unallocated (and, on completing, release its resources so that the others can complete in turn). A minimal sketch of the safety check follows. [3]
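The sketch below is a single-resource-type safety check in Python; the allocation figures are invented for illustration. A request would only be granted if the resulting state still passes this check.

def is_safe(available, allocation, maximum):
    """available: free units; allocation[i] / maximum[i]: units per process."""
    need = [m - a for m, a in zip(maximum, allocation)]
    finished = [False] * len(allocation)
    free = available
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and need[i] <= free:
                free += allocation[i]      # process i can finish, then releases its units
                finished[i] = True
                progress = True
    return all(finished)

if __name__ == "__main__":
    # Ten units in total: three free, three processes holding 3, 2 and 2 units.
    print(is_safe(available=3, allocation=[3, 2, 2], maximum=[9, 4, 7]))   # True (safe)
    print(is_safe(available=1, allocation=[4, 2, 3], maximum=[9, 4, 7]))   # False (unsafe)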

3. (a) Contrast distance-vector routing protocols with link-state routing protocols. What methods may a distance-vector protocol use to determine the best route?
(12)
(b) Define and explain the two major problems with distance-vector routing protocols, and the methods that can be used to overcome these problems.
(13)
Total Marks [25]
Answer

(a)

· Distance-vector. Distance-vector routing uses a distance-vector algorithm (such as the Bellman-Ford routing algorithm), which uses a direction (vector) and distance to any link in the internetwork to determine the best route. Each router periodically sends information to each of its neighbours on the cost that it takes to get to a distant node. Typically this cost relates to the hop count (as with RIP); a small hop-count update sketch follows the metric list below. The main problems with distance-vector are that updates to the network propagate step-by-step, and that it has high bandwidth requirements, as each router sends its complete routing table to all of its neighbours at regular intervals.
· Link-state. Link-state involves each router building up the complete topology of the entire internetwork (or at least of the area in which the router is situated), thus each router contains the same information. With this method, routers only send information to all of the other routers when there is a change in the topology of the network. Link-state is also known as shortest path first. Typical link-state protocols are OSPF and IS-IS. With OSPF, each router builds a hierarchical topology of the internetwork, with itself at the root of the tree. The main problems with link-state are that routers require much more processing power to update the database, and more memory, as each router needs to build a database with details of all the routers on the network.

Metrics that a distance-vector protocol may use to determine the best route include:

· Bandwidth. The data capacity of a link, which is typically defined in bps.
· Delay. The amount of time that is required to send a packet from the source to a destination.
· Load. A measure of the amount of activity on a route.
· Reliability. Relates to the error rate of the link.
· Hop count. Defined by the number of routers that it takes between the current router and the destination.
· Ticks. Defines the delay of a link by a number of ticks of a clock.
· Cost. An arbitrary value which defines the cost of a link, such as financial expense, bandwidth, and so on.
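A minimal hop-count update sketch (as mentioned in the distance-vector bullet above); the network names and the single-neighbour merge are invented purely to show how a router folds a neighbour's advertised table into its own.

INF = 16                     # RIP-style "infinity": 16 hops means unreachable

def merge(own_table, advertised, cost_to_neighbour=1):
    """own_table / advertised: dict of destination -> (hops, next hop)."""
    updated = dict(own_table)
    for dest, (hops, _) in advertised.items():
        new_hops = min(hops + cost_to_neighbour, INF)
        if dest not in updated or new_hops < updated[dest][0]:
            updated[dest] = (new_hops, "neighbour")
    return updated

if __name__ == "__main__":
    router = {"Network A": (1, "direct")}
    advert = {"Network A": (2, "W"), "Network B": (1, "W")}   # neighbour's view
    print(merge(router, advert))   # Network B learned at 2 hops; Network A stays at 1 hop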
[12]

(b)
· Routing Loops. These occur when slow convergence causes inconsistent routing entries after a change in the network. For example, suppose Network A becomes unavailable. Router V will report this to Router Y, which will then report it to Router Z and Router X. Both Routers X and Z will stop routing to Network A. Unfortunately Router W still thinks it can reach Network A in 3 hops, thus Router Z will receive information that says that Router W can get to Network A in 3 hops, and that it is unreachable via Router Y. Router Z therefore updates its routing table so that Network A is reachable in 4 hops, with Router W as the next router to the destination. Router Z will then send its updated information to Router Y, which now believes there is a path to Network A through Router Z and Router W, and so on. Router Y will then inform Router X, and so on. Thus any data packet which is destined for Network A will now loop from Router Z to Router W to Router X to Router Y to Router Z, and so on.
· Counting to Infinity. Data packets can loop around forever because of incorrect routing information. As routing updates pass around the loop, the hop-count distance-vector for the destination increments each time it passes through a router.

[6]

To overcome these problems:

· Setting infinity values. The count-to-infinity problem will eventually resolve itself when the routers have counted up to infinity (as infinity is constrained by the maximum definable value), but while the network is counting to this value the routing information will be incorrect. To reduce the time that this takes, a maximum value is normally defined. In RIP this value is set at 16 hops for the hop-count distance-vector, thus the maximum number of usable hops is 15. This leads to a problem in that a destination which is more than 15 hops away is treated as unreachable, as a value of 16 or more defines that the network is unreachable.
· Split horizon. This method tries to overcome routing loops. With split horizon, a router never advertises a route back out of the interface from which it learned that route, since the neighbour on that interface knows more about the state of that network than the router does.
· Hold-down timers. This method helps to overcome the count-to-infinity problem. With hold-down, a router starts a hold-down timer when it receives an update from a neighbour indicating that a previously accessible network is now inaccessible. It also marks the route as inaccessible, and ignores further updates for that route until the timer expires, by which time the rest of the network should have converged.

[7]

4. (a) Outline the main disadvantages with Novell NetWare 3.1, and how these have been overcome in Novell NetWare 4.1/NDS. Also, discuss the main enhanced features of Novell NetWare 4.1/NDS.
(13)
(b) Explain how NDS uses a 32-bit time stamp to record the time of events. Also explain the different types of time servers that can be set up on a NetWare 4.1 network.
(12)
Total Marks [25]
Sample answer

The main disadvantages of NetWare 3.x are:

· It uses SPX/IPX which is incompatible with TCP/IP traffic.
· It is difficult to synchronise servers with user information.
· The file structure is local to individual servers.
· Server architecture is flat and cannot be organised into a hierarchical structure.
[4]

These have been addressed with NetWare 4.1, in which the bindery has been replaced by Novell Directory Services (NDS). Its main characteristics are:

· Hierarchical server structure.
· Network-wide users and groups.
· Global objects. NDS integrates users, groups, printers, servers, volumes and other physical resources into a hierarchical tree structure.
· System-wide login with a single password. This allows users to access resources which are connected to remote servers.
· NDS processes logins between NetWare 3.1 and NetWare 4.1 servers, if the login names and passwords are the same.
· Supports distributed file system.
· Synchronisation services. NDS allows for directory synchronisation, which allows directories to be mirrored on different partitions or different servers. This provides increased reliability in that if a server develops a fault then the files on that server can be replicated by another server.
· Standardised organisational structure for applications, printers, servers and services. This provides a common structure across different organisations.
· Support for NFS server for UNIX resources.
[9]



(b)
The PC contains a 32-bit counter which is updated every second and is referenced to 1 January 1970 (the standard epoch) [1]. This provides for 4,294,967,296 seconds. The NDS timestamp uses this value and adds other fields to define where the event occurred and to distinguish events that occur within a single second. It uses 64 bits and its format is:

· Seconds (32 bits). This stores the number of seconds since 1/1/1970. This allows for around 4 billion seconds, which is approximately 136 years, before a roll-over occurs. [1]
· Replica Number (16 bits). This is a unique number which defines where the event occurred and where the timestamp was issued. [1]
· Event ID (16 bits). This gives each event that occurs within a second a different Event ID, which is required because many events can occur within a single second. The value is reset every second, and thus allows up to 65,536 events each second. (A small packing sketch follows this list.) [2]
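A minimal sketch of packing the 64-bit timestamp described above; the big-endian field order and the example replica/event values are assumptions for illustration only, not taken from Novell's documentation.

import struct
import time

def make_timestamp(replica_number, event_id, seconds=None):
    """Pack seconds (32 bits), replica number (16 bits) and event ID (16 bits)."""
    seconds = int(time.time()) if seconds is None else seconds
    return struct.pack(">IHH", seconds & 0xFFFFFFFF,
                       replica_number & 0xFFFF, event_id & 0xFFFF)

stamp = make_timestamp(replica_number=3, event_id=17)
print(len(stamp) * 8, "bits:", stamp.hex())
print(struct.unpack(">IHH", stamp))       # (seconds, replica number, event ID)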

NDS always uses the most recently time stamped object or properties for any updates. When an object is deleted its property is marked as "not present". It will only be deleted once the replica synchronisation process propagates the change to all other replicas. [1]

Time server types

NetWare 4.1 servers are set up as time servers when they are installed. They can either be:

· Primary time servers. A primary time server provides time information to others, but must contact at least one other primary (or reference) server to determine its own time. [1]
· Secondary time servers. These are time consumers, which receive their time from other servers (such as from a primary, reference or single reference time server). [1]
· Reference time servers. These servers do not need to contact any other servers, and provide a time source for other primary time servers. This is a good option where there is a large network, as the primary time servers can provide local time information (this is called a time provider group). [2]
· Single reference time servers. These servers do not need to contact other time servers to get their own time and are used as a single source of time. This is normally used in a small network, where there is a single reference time server with one or more secondary time servers. The single reference time server and reference time server normally get their local time information from another source, such as Internet time, radio or satellite time. [2]



5. (a) Explain how NFS creates a networked file system. Also, outline the main components of NFS and how they fit into the OSI model.
(12)
(b) Explain how NIS is used to create a network configuration, and the files that are used. How does this method improve network security and the operation of a network? Also explain the operation of primary and secondary NIS servers. Why is it important to have secondary NIS servers?
(13)
Total Marks [25]

Sample answer

(a)

NFS uses a client/server architecture where a computer can act as an NFS client, an NFS server or both. An NFS client makes requests to access data and files on servers; the server then makes that specific resource available to the client. [1]
NFS servers are passive and stateless. They wait for requests from clients and they maintain no information on the client. One advantage of servers being stateless is that it is possible to reboot servers without adverse consequences to the client. [1]
The components of NFS are as follows:

· NFS remote file access may be accompanied by network information service (NIS). [1]
· External data representation (XDR) is a universal data representation used by all nodes. It provides a common data representation where applications are to run transparently on a heterogeneous network or where data is to be shared among heterogeneous systems. Each node translates machine-dependent data formats to XDR format when sending data, and back from XDR when receiving it. It is XDR that enables heterogeneous nodes and operating systems to communicate with each other over the network. [2]
· Remote Procedure Call (RPC) provides the ability for clients to transparently execute procedures on remote systems of the network. NFS services run on top of the RPC, which corresponds to the session layer of the OSI model. [2]
· Network lock manager (rpc.lockd) allows users to co-ordinate and control access to information on the network. It supports file locking and synchronises access to shared files. [2]

Figure 1 shows how the protocols fit into the OSI model.


Figure 1 NFS services protocol stack
[3]
(b)
NIS allows a user to log into any computer within a domain, by maintaining global password and configuration files. NIS allows the system manager to centralize the key configuration files on a single master server. Whenever anyone wants to log into the network, the master server (or one of its slave servers) is consulted. Figure Q5.1 illustrates some of the files that the server maintains; these include the password file (which contains the passwords for all the users within the domain) and the group file (the groups that users are associated with). It is thus easy for the system administrator to add and delete users on the NIS server, and these changes will be reflected across the domain. A user cannot log into any of the clients without the client checking with the server to see if they have a valid login and password.
[7]


Figure Q5.1 NIS domain

With NIS, a single node on a network acts as the NIS master server, with a number of NIS slave servers, which receive their NIS information from the master server. The slaves are important in that they hold copies of the most up-to-date version of the NIS database, so if the master were to crash, or become uncontactable, the slaves could still provide password, group, and other NIS information to the clients in the domain. The slaves also relieve the workload on the master, as it may become busy responding to many NIS requests. When a client first starts up it sends out a broadcast to all NIS servers (master or slaves) on the network and waits for the first one to respond. The client then binds to the first that responds and addresses all NIS requests to that server. If this server becomes inoperative then an NIS client will automatically rebind to the first NIS server which responds to another broadcast. Figure Q5.2 illustrates this.
[5]



Figure Q5.2 NIS domain


6. (a) Explain typical methods that a networking operating system may use to create a robust networked data storage environment. Highlight, possibly with a basic example, the method that RAID 5 uses to protect data against a hard disk failure.
(13)
(b) Contrast the security model for a FAT-based system, a UNIX-based system and a Windows NT system.
(12)
Total Marks [25]

Sample answer

(a)

Main methods:

· Disk mirroring. Network servers normally support disk mirroring which protects against hard disk failure. It uses two partitions on different disk drives which are connected to the same controller. Data written to the first (primary) partition is mirrored automatically to the secondary partition. If the primary disk fails then the system uses the partition on the secondary disk. Mirroring also allows unallocated space on the primary drive to be allocated to the secondary drive. On a disk mirroring system the primary and secondary partitions have the same drive letter (such as C: or D:) and users are unaware that disks are being mirrored. [3]
· Disk duplexing. Disk duplexing means that mirrored pairs are controlled by different controllers. This provides for fault tolerance on both disk and controller. Unfortunately, it does not support multiple controllers connected to a single disk drive. [2]
· Striping with parity. Network servers normally support disk striping with parity. This technique is based on RAID 5 (Redundant Array of Inexpensive Disks), where a number of partitions on different disks are combined to make one large logical drive. Data is written in stripes across the disk drives, along with additional parity bits. For example, if a system has four disk drives then data is written to the first three disks and the parity is written to the fourth drive. Typically the stripe is 64KB, thus 64KB is written to Drive 1, the same to Drive 2 and Drive 3, and then the parity of the other three to the fourth. The following example illustrates the concept, where a system writes the bit patterns 110, 000, 111 and 100 across the first three drives, giving (odd) parity bits of 1, 1, 0 and 0. If one of the disk drives fails then the parity bit allows the bits on the failed disk to be recovered: the missing bit is the one which restores odd parity across the stripe (that is, the complement of the XOR of the remaining bits). For example, if disk 3 fails and the remaining bits are 1, 1 and 1 then the recovered bit is 0; remaining bits of 0, 0 and 1 also give 0, and so on. (A small sketch following the table below verifies this.)

Disk 1    Disk 2    Disk 3    Disk 4 (odd parity)
  1         1         0              1
  0         0         0              1
  1         1         1              0
  1         0         0              0
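A minimal Python sketch of the parity and recovery arithmetic, using the odd-parity convention of the table above (the stripes are the same four rows; note that most real RAID 5 implementations use even parity, in which case recovery is a plain XOR):

def odd_parity(bits):
    """Parity bit that makes the total number of 1s in the stripe odd."""
    return 1 ^ (sum(bits) % 2)

def recover(remaining_bits):
    """Reconstruct a missing bit so that the stripe has odd parity overall."""
    return 1 ^ (sum(remaining_bits) % 2)

stripes = [(1, 1, 0), (0, 0, 0), (1, 1, 1), (1, 0, 0)]    # disks 1-3, per the table
for data in stripes:
    p = odd_parity(data)                        # bit written to disk 4
    # simulate losing disk 3 and rebuilding it from disks 1, 2 and 4
    rebuilt = recover((data[0], data[1], p))
    print(data, "parity:", p, "disk 3 rebuilt as:", rebuilt, rebuilt == data[2])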

The 64KB stripes of data are interleaved across the disks, and the parity block rotates with them: the parity block for the first stripe is written to the first disk drive, for the next stripe to the second disk, and so on. A system with four disk drives would store the following data:

Disk 1           Disk 2           Disk 3           Disk 4
Parity block 1   Data block A     Data block B     Data block C
Data block D     Parity block 2   Data block E     Data block F
Data block G     Data block H     Parity block 3   Data block I
[8]

(b)

Students should generally discuss the main properties of objects in the system, such as:

· FAT does not allow for security objects; it only supports simple file attributes (such as read-only, hidden, system and archive) and has no concept of owners, groups or system administrators.
· UNIX supports file system objects and uses read (r), write (w) and execute (x) permissions for three categories: rwx (user), rwx (group) and rwx (world). (A small decoding sketch follows this list.)
· NT uses objects with attributes of Full Control, Read, List, Add, Change and Change Permissions for OWNER, GROUP, User ACL, and System ACL.
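A minimal sketch of how the UNIX rwx triplets map onto a numeric mode, using Python's standard stat module; the 0o750 value is just an illustration.

import stat

mode = stat.S_IFREG | 0o750        # a regular file: rwx for user, r-x for group, --- for world
print(stat.filemode(mode))         # -rwxr-x---

# the same idea by hand: three bits each for user, group and world
for who, shift in (("user", 6), ("group", 3), ("world", 0)):
    bits = (mode >> shift) & 0o7
    perms = "".join(flag if bits & mask else "-"
                    for flag, mask in (("r", 4), ("w", 2), ("x", 1)))
    print(who, perms)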

NT allows for more control, and different types of users to be created, but UNIX is possibly simpler to understand.
[12]
