1. Explain how virtual addresses are mapped on to physical addresses.
Let’s assume that we have 1 MB of RAM. RAM is also called physical memory. We can subdivide the RAM into 4 KB pages. Thus 1 MB / 4 KB = 256 pages, so our RAM has 256 physical pages, each holding 4 KB. Let’s assume we have 10 MB of disk. Thus, we have 2560 disk pages. In principle, each program may have up to 4 GB of address space. Thus, it can, in principle, access 2^20 virtual pages. In reality, many of those pages are considered invalid pages.
How is an address translated from virtual to physical? First, like the cache, we split up a 32 bit virtual address into a virtual page (which is like a tag) and a page offset.
If this looks a lot like a fully-associative cache, but whose offset is much much larger, it’s because that’s basically what it is.
We must convert the virtual page number to a physical page number. In our example, the virtual page number consists of 20 bits. A page table is a data structure which consists of 2^20 page table entries (PTEs). Think of the page table as an array of page table entries, indexed by the virtual page number.
The page table’s index starts at 0 and ends at 2^20 − 1.
Suppose your program generated the virtual address F0F0F0F0 in hexadecimal (which is 1111 0000 1111 0000 1111 0000 1111 0000 in binary). First, you would split the address into a virtual page number and a page offset.
Then, you’d see if the virtual page had a corresponding physical page in RAM using the page table. If the valid bit of the PTE is 1, then you’d translate the virtual page to a physical page, and append the page offset. That would give you a physical address in RAM.
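The translation described above can be sketched in a few lines of Python. This is a minimal illustration, not a real MMU: the page table is a plain dictionary, and the particular mapping (VPN 0xF0F0F to physical page 0x42) is an invented example.

```python
PAGE_SIZE = 4096        # 4 KB pages
OFFSET_BITS = 12        # log2(4096) bits of page offset

# Hypothetical page table: virtual page number -> (valid bit, physical page number).
page_table = {0xF0F0F: (1, 0x42)}   # pretend VPN 0xF0F0F is resident in frame 0x42

def translate(vaddr):
    vpn = vaddr >> OFFSET_BITS          # top 20 bits: virtual page number
    offset = vaddr & (PAGE_SIZE - 1)    # bottom 12 bits: page offset
    valid, ppn = page_table.get(vpn, (0, None))
    if not valid:
        raise RuntimeError("page fault")  # OS must bring the page into RAM first
    # Translate the page number and append the unchanged offset.
    return (ppn << OFFSET_BITS) | offset

print(hex(translate(0xF0F0F0F0)))  # -> 0x420f0
```

Note that the offset bits pass through untouched; only the page-number bits are replaced.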
2. Discuss the Optimal Page Replacement algorithm.
The theoretically optimal page replacement algorithm, or Belady’s optimal page replacement policy, is an algorithm that works as follows: when a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future. For example, a page that is not going to be used for the next 6 seconds will be swapped out over a page that is going to be used within the next 0.4 seconds.
This algorithm cannot be implemented in a general-purpose operating system because it is impossible to reliably compute how long it will be before a page is next used, except when all software that will run on the system is known beforehand and is amenable to static analysis of its memory reference patterns, or when only a class of applications allowing run-time analysis is admitted. Despite this limitation, algorithms exist that can offer near-optimal performance: the operating system keeps track of all pages referenced by the program, and it uses those data to decide which pages to swap in and out on subsequent runs. This approach can give near-optimal performance, but not on the first run of a program, and only if the program’s memory reference pattern is relatively consistent from run to run.
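Although OPT cannot be implemented online, it is easy to simulate offline when the whole reference string is known in advance. A minimal sketch (the reference string below is an arbitrary illustrative example):

```python
def opt_replace(refs, frames):
    """Simulate Belady's optimal replacement; return the number of page faults."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                      # hit: nothing to do
        faults += 1
        if len(memory) < frames:
            memory.append(page)           # a frame is still free
        else:
            # Evict the resident page whose next use lies farthest in the future
            # (a page never used again is the ideal victim).
            future = refs[i + 1:]
            victim = max(memory,
                         key=lambda p: future.index(p) if p in future else float('inf'))
            memory[memory.index(victim)] = page
    return faults

print(opt_replace([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # -> 7
```

Because the simulator looks ahead in `refs`, it needs the entire trace up front, which is exactly why the policy is a benchmark rather than a practical algorithm.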
3. Explain the concept of a file system.
A file system (often also written as filesystem) is a method of storing and organizing computer files and their data. Essentially, it organizes these files into a database for storage, organization, manipulation, and retrieval by the computer’s operating system. Most file systems make use of an underlying data storage device that offers access to an array of fixed-size physical sectors, generally a power of 2 in size (512 bytes or 1, 2, or 4 KiB are most common). The file system is responsible for organizing these sectors into files and directories, and for keeping track of which sectors belong to which file and which are not being used. Most file systems address data in fixed-size units called “clusters” or “blocks”, which contain a certain number of disk sectors (usually 1–64). This is the smallest amount of disk space that can be allocated to hold a file.
However, file systems need not make use of a storage device at all. A file system can be used to organize and represent access to any data, whether it is stored or dynamically generated.
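The cluster granularity described above is easy to quantify. The sketch below assumes a hypothetical file system with 512-byte sectors and 8 sectors per cluster (a 4 KiB allocation unit); real file systems vary.

```python
import math

SECTOR = 512                       # bytes per physical sector
SECTORS_PER_CLUSTER = 8
CLUSTER = SECTOR * SECTORS_PER_CLUSTER   # 4096-byte allocation unit

def clusters_needed(file_size):
    """Smallest number of whole clusters that can hold the file."""
    return max(1, math.ceil(file_size / CLUSTER))

def slack(file_size):
    """Internal fragmentation: allocated space the file does not actually use."""
    return clusters_needed(file_size) * CLUSTER - file_size

print(clusters_needed(10_000), slack(10_000))  # -> 3 2288
```

A 10,000-byte file thus occupies three whole clusters (12,288 bytes), wasting 2,288 bytes: the cluster is indeed the smallest allocatable unit.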
4. Explain the execution of RPC.
Remote procedure call (RPC) is an inter-process communication technology that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details of this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. When the software in question is written using object-oriented principles, RPC may be referred to as remote invocation or remote method invocation.
An RPC is initiated by the client sending a request message to a known remote server in order to execute a specified procedure with supplied parameters. A response is returned to the client, where the application continues its processing. There are many variations and subtleties in various implementations, resulting in a variety of different (incompatible) RPC protocols. While the server is processing the call, the client is blocked (it waits until the server has finished processing before resuming execution).
An important difference between remote procedure calls and local calls is that remote calls can fail because of unpredictable network problems. Also, callers generally must deal with such failures without knowing whether the remote procedure was actually invoked. Idempotent procedures (those which have no additional effects if called more than once) are easily handled, but enough difficulties remain that code which calls remote procedures is often confined to carefully written low-level subsystems.
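The call sequence above can be sketched with a toy client stub and server skeleton. This is a local simulation only: the procedure registry, the `pickle`-based marshalling, and the direct function call standing in for the network hop are all illustrative choices, not a real RPC protocol.

```python
import pickle

# A toy "server" with one registered procedure.
PROCEDURES = {"add": lambda a, b: a + b}

def server_handle(request_bytes):
    """Server side: unmarshal the request, run the procedure, marshal the reply."""
    name, args = pickle.loads(request_bytes)
    return pickle.dumps(PROCEDURES[name](*args))

def rpc_call(name, *args):
    """Client stub: marshal the call, 'send' it, block until the reply arrives."""
    request = pickle.dumps((name, args))   # in a real RPC this crosses the network
    reply = server_handle(request)         # the client is blocked during this call
    return pickle.loads(reply)

print(rpc_call("add", 2, 3))  # -> 5
```

The caller never touches the marshalled bytes, which is exactly what makes the remote call look like a local one; the network failures discussed above would surface inside `rpc_call`, invisible to the procedure’s own logic.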
5. Write a note on computer virus.
A computer virus is a computer program that can copy itself and infect a computer. The term “virus” is also commonly but erroneously used to refer to other types of malware, including but not limited to adware and spyware programs that do not have the reproductive ability. A true virus can spread from one computer to another (in some form of executable code) when its host is taken to the target computer; for instance because a user sent it over a network or the Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive. Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by another computer.
As stated above, the term “computer virus” is sometimes used as a catch-all phrase to include all types of malware, even those that do not have the reproductive ability. Malware includes computer viruses, computer worms, Trojan horses, most rootkits, spyware, dishonest adware and other malicious and unwanted software, including true viruses. Viruses are sometimes confused with worms and Trojan horses, which are technically different. A worm can exploit security vulnerabilities to spread itself automatically to other computers through networks, while a Trojan horse is a program that appears harmless but hides malicious functions. Worms and Trojan horses, like viruses, may harm a computer system’s data or performance. Some viruses and other malware have symptoms noticeable to the computer user, but many are surreptitious or simply do nothing to call attention to themselves. Some viruses do nothing beyond reproducing themselves.
6. Explain first-fit, best-fit and worst-fit allocation algorithms with an example.
First Fit – A resource allocation scheme (usually for memory). First Fit fits data into memory by scanning from the beginning of available memory to the end, until the first free space which is at least big enough to accept the data is found. This space is then allocated to the data. Any left over becomes a smaller, separate free space.
Best Fit – A resource allocation scheme (usually for memory). Best Fit tries to determine the best place to put the new data. The definition of ‘best’ may differ between implementations, but one example might be to try and minimise the wasted space at the end of the block being allocated – i.e. use the smallest space which is big enough.
Worst Fit – The allocation policy that always allocates from the largest free block. Commonly implemented using a size-ordered free-block chain (largest first). In practice, this tends to work quite badly because it eliminates all large blocks, so large requests cannot be met.
Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in order), how would each of the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which algorithm makes the most efficient use of memory?
(a) First-fit:
(b) 212K is put in 500K partition
(c) 417K is put in 600K partition
(d) 112K is put in 288K partition (new partition 288K = 500K – 212K)
(e) 426K must wait
(f) Best-fit:
(g) 212K is put in 300K partition
(h) 417K is put in 500K partition
(i) 112K is put in 200K partition
(j) 426K is put in 600K partition
(k) Worst-fit:
(l) 212K is put in 600K partition
(m) 417K is put in 500K partition
(n) 112K is put in 388K partition (new partition 388K = 600K – 212K)
(o) 426K must wait
In this example, best-fit turns out to be the best, since it is the only algorithm that places all four processes.
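The three placements above differ only in how a hole is chosen from the candidates that are big enough, so a single simulator parameterised by that choice reproduces all three results (a sketch; `None` marks a request that must wait):

```python
def allocate(partitions, requests, choose):
    """Place each request, using `choose` to pick among the big-enough holes."""
    holes = list(partitions)
    placements = []
    for size in requests:
        candidates = [h for h in holes if h >= size]
        if not candidates:
            placements.append(None)              # request must wait
            continue
        hole = choose(candidates)
        placements.append(hole)
        holes[holes.index(hole)] = hole - size   # leftover becomes a smaller hole
    return placements

parts = [100, 500, 200, 300, 600]
reqs = [212, 417, 112, 426]
print(allocate(parts, reqs, lambda c: c[0]))  # first fit:  [500, 600, 288, None]
print(allocate(parts, reqs, min))             # best fit:   [300, 500, 200, 600]
print(allocate(parts, reqs, max))             # worst fit:  [600, 500, 388, None]
```

First fit takes the first adequate hole in scan order, best fit the smallest adequate hole, worst fit the largest; the outputs match the hand-worked placements above.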
7. Explain demand paging virtual memory system.
Demand paging follows the rule that pages should only be brought into memory if the executing process demands them. This is often referred to as lazy evaluation, as only those pages demanded by the process are swapped from secondary storage to main memory. Contrast this with pure swapping, where all memory for a process is swapped from secondary storage to main memory during process startup.
When a process is to be swapped into main memory for processing, the pager guesses which pages will be used before the process is swapped out again, and loads only those pages into memory. This avoids loading pages that are unlikely to be used and focuses on the pages needed during the current execution period. Thus, not only is unnecessary page loading during swapping avoided, but the pager also tries to anticipate which pages will be needed, reducing page loads during execution.
Commonly, this is achieved with a page table implementation. The page table maps logical memory to physical memory, and each entry carries a valid–invalid bit marking whether the page is valid or invalid. A valid page is one that currently resides in main memory; an invalid page is one that currently resides in secondary memory.
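A minimal simulation of this behaviour follows. It is a sketch under simplifying assumptions: frames are assumed to be freely available (no replacement is modelled), and "reading the page from secondary storage" is reduced to a comment.

```python
PAGE_SIZE = 4096
NUM_FRAMES = 4

page_table = {}                      # VPN -> physical frame; present only if valid
free_frames = list(range(NUM_FRAMES))
faults = 0

def access(vaddr):
    """Touch a virtual address, loading its page into memory on demand."""
    global faults
    vpn = vaddr // PAGE_SIZE
    if vpn not in page_table:        # invalid: page not in main memory
        faults += 1
        frame = free_frames.pop(0)   # assume a free frame is available
        # ...here the pager would read the page in from secondary storage...
        page_table[vpn] = frame
    return page_table[vpn] * PAGE_SIZE + vaddr % PAGE_SIZE

access(0); access(5000); access(100)   # third access hits page 0 again
print(faults)  # -> 2
```

Only the first touch of each page faults; subsequent accesses to a resident page are ordinary memory references, which is the whole point of loading lazily.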
8. Explain different operations possible on Files.
Files on a computer can be created, moved, modified, grown, shrunk and deleted. In most cases, computer programs that are executed on the computer handle these operations, but the user of a computer can also manipulate files if necessary. For instance, Microsoft Word files are normally created and modified by the Microsoft Word program in response to user commands, but the user can also move, rename, or delete these files directly by using a file manager program such as Windows Explorer (on Windows computers).
Although the way programs manipulate files varies according to the operating system and file system involved, the following operations are typical:
Creating a file with a given name
Setting attributes that control operations on the file
Opening a file to use its contents
Reading or updating the contents
Committing updated contents to durable storage
Closing the file, thereby losing access until it is opened again
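The typical operations listed above map directly onto operating-system calls. A small Python sketch (the file name `demo.txt` is an arbitrary example; it is created under the system temporary directory):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "demo.txt")

with open(path, "w") as f:        # create the file and open it for use
    f.write("hello")              # update its contents
    f.flush()
    os.fsync(f.fileno())          # commit the updated contents to durable storage
# leaving the `with` block closes the file, ending access until it is reopened

with open(path) as f:             # reopen to read the contents
    print(f.read())               # -> hello

os.rename(path, path + ".bak")    # move/rename the file
os.remove(path + ".bak")          # delete it
```

The `flush`/`fsync` pair makes the commit step explicit: `flush` empties the program’s buffer into the OS, and `fsync` asks the OS to push the data to the storage device.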
9. Explain how applications and data can be distributed.
There are two main reasons for using distributed systems and distributed computing. First, the very nature of the application may require the use of a communication network that connects several computers. For example, data is produced in one physical location and is needed in another location.
Second, there are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example, it may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer. A distributed system can be more reliable than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system.
Consider, as an example, the distributed graph colouring problem. The graph G is the structure of the computer network: there is one computer for each node of G and one communication link for each edge of G. Initially, each computer only knows about its immediate neighbours in the graph G; the computers must exchange messages with each other to discover more about the structure of G. Each computer must produce its own colour as output.
The main focus is on coordinating the operation of an arbitrary distributed system.
While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is a lot of interaction between the two fields.
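The message-exchange idea above can be illustrated with a toy synchronous simulation. The four-node path graph and the single "knowledge" set per node are invented for the example; real distributed algorithms would run on separate machines.

```python
# Adjacency list of the network graph G: node -> set of immediate neighbours.
G = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}

# Each node starts out knowing only itself and its immediate neighbours.
knowledge = {v: {v} | G[v] for v in G}

def exchange_round():
    """One synchronous round: every node sends what it knows to its neighbours."""
    messages = {v: set() for v in G}
    for v in G:
        for u in G[v]:
            messages[u] |= knowledge[v]   # v's knowledge travels one hop to u
    for v in G:
        knowledge[v] |= messages[v]

exchange_round()
print(sorted(knowledge[1]))  # -> [1, 2, 3]: node 1 has now learned about node 3
```

After each round, information has travelled one more hop, so after `diameter(G)` rounds every node knows the whole graph; this is the basic cost model used for synchronous distributed algorithms.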
10. How is protection implemented using the concept of domains?
System resources need to be protected. Resources include both hardware and software, and different mechanisms are used to protect them.
The operating system defines the concept of a domain. A domain consists of objects and the access rights for those objects; a subject is then associated with a domain and thereby gains access to the objects in it. In other words, a domain is a set of access rights for associated objects, and a system consists of many such domains. A user process always executes in exactly one of the domains, although domain switching is also possible. In practice, a domain is a user with a specific id having different access rights for different objects such as files, directories, and devices. Processes created by the user inherit all access rights of that user.
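The domain model is often drawn as an access matrix with one row per domain and one column per object. A minimal sketch (the domain names, objects, and rights below are invented for illustration):

```python
# Hypothetical access matrix: domain -> object -> set of access rights.
access_matrix = {
    "D1": {"file1": {"read"}, "printer": {"print"}},
    "D2": {"file1": {"read", "write"}, "file2": {"execute"}},
}

def check_access(domain, obj, right):
    """A process executing in `domain` may use `right` on `obj` only if granted."""
    return right in access_matrix.get(domain, {}).get(obj, set())

print(check_access("D1", "file1", "read"))   # -> True
print(check_access("D1", "file1", "write"))  # -> False
```

Domain switching corresponds to a process moving from one row of the matrix to another, instantly changing which rights apply to it.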
On Windows Server Systems, a domain controller (DC) is a server that responds to security authentication requests (logging in, checking permissions, etc.) within the Windows Server domain. A domain is a concept introduced in Windows NT whereby a user may be granted access to a number of computer resources with the use of a single username and password combination.