Past Question 2079
Tribhuvan University
Institute of Science and Technology
2079
Bachelor Level / fourth-semester / Science
Computer Science and Information Technology (CSC264)
Operating System
Full Marks: 60 + 20 + 20
Pass Marks: 24 + 8 + 8
Time: 3 Hours
Candidates are required to give their answers in their own
words as far as practicable.
The figures in the margin indicate full marks.
Section A
Long Answer Question
1. Discuss single-level and two-level directory systems. Consider the following processes and answer the questions below.
Process | Allocation (A B C D) | Max (A B C D) | Available (A B C D) |
---|---|---|---|
P0 | 0 0 1 2 | 0 0 1 2 | 1 5 2 0 |
P1 | 1 0 0 0 | 1 7 5 0 | |
P2 | 1 3 5 4 | 2 3 5 6 | |
P3 | 0 6 3 2 | 0 6 5 2 | |
P4 | 0 0 1 4 | 0 6 5 6 | |
a. What is the content of matrix Need?
b. Is the system in a safe state?
c. If P1 requests (0,4,2,0), can the request be granted immediately?
2. When does a race condition occur in inter-process communication? What does busy waiting mean, and how can it be handled using the sleep and wakeup strategy?
3. Define shell and system call. Suppose a disk has 201 cylinders, numbered from 0 to 200. Currently the disk arm is at cylinder 95, and there is a queue of disk access requests for cylinders 82, 170, 43, 140, 24, 16 and 190. Calculate the seek time for the disk scheduling algorithms FCFS, SSTF, SCAN and C-SCAN.
Section B
Short Answer Questions
4. Distinguish between starvation and deadlock. How does the system schedule processes using multiple queues?
5. List any two demerits of disabling interrupts to achieve mutual exclusion. Describe fixed and variable partitioning.
6. For the following dataset, compute the average waiting time for SRTN and SJF.
Process | Arrival Time | Burst Time |
---|---|---|
P0 | 0 | 7 |
P1 | 2 | 4 |
P2 | 4 | 1 |
P3 | 5 | 4 |
7. Discuss the advantages and disadvantages of implementing a file system using a linked list.
8. Consider the page references 7,0,1,2,0,3,0,4,2,3,0,3,2. Find the number of page faults using OPR and FIFO, with 4 page frames.
9. Describe the working mechanism of DMA.
10. What is the task of a disk controller? List some drawbacks of segmentation.
11. Write the structure and advantages of TLB.
12. Why do we need the concept of locality of reference? List the advantages and disadvantages of the Round Robin algorithm.
Past Question 2081
Tribhuvan University
Institute of Science and Technology
2081
Bachelor Level / fourth-semester / Science
Computer Science and Information Technology (CSC264)
Operating System
Full Marks: 60 + 20 + 20
Pass Marks: 24 + 8 + 8
Time: 3 Hours
Candidates are required to give their answers in their own
words as far as practicable.
The figures in the margin indicate full marks.
SECTION A
Attempt any TWO questions.
1. Explain the translation of logical address into physical
address using segment table with necessary diagram. List advantages and
disadvantages of segmentation.
2. Find the seek time using SCAN, C-SCAN, Look and C-Look
disk scheduling algorithms for processing the following request queue:
35, 70, 45, 15, 65, 20, 80, 90, 75, 130.
Suppose the disk has tracks numbered from 0 to 150 and assume the disk arm to
be at 30 and moving outward.
3. Explain the Sleeping Barber problem. Illustrate on how it
can be solved.
SECTION B
Attempt any EIGHT questions.
4. Explain microkernels and exokernels.
5. Consider a swapping system in which memory consists of
the following hole sizes in memory order:
15 MB, 2 MB, 10 MB, 6 MB, 8 MB and 20 MB.
Which hole is taken for successive segment requests of:
(a) 10 MB
(b) 10 MB
For first fit, next fit and best fit.
6. Explain how semaphore solves the problem of critical
section.
7. How do you think deadlock can be avoided? Explain.
8. Explain Inter-Process Communication in Linux.
9. List different file structures and explain them.
10. Calculate the average waiting time and turnaround time
using priority algorithm (Priority 1 being the highest) for the given scenario:
PID | Burst Time | Arrival Time | Priority |
---|---|---|---|
A | 3 | 0 | 3 |
B | 2 | 2 | 3 |
C | 4 | 3 | 2 |
D | 2 | 3 | 1 |
11. Explain memory-mapped I/O.
12. Write short notes on:
- Virtual Memory
- Race Condition
Past Question 2080
Tribhuvan University
Institute of Science and Technology
2080-new
Bachelor Level / fourth-semester / Science
Computer Science and Information Technology (CSC264)
Operating System
Full Marks: 60 + 20 + 20
Pass Marks: 24 + 8 + 8
Time: 3 Hours
Candidates are required to give their answers in their own
words as far as practicable.
The figures in the margin indicate full marks.
SECTION A
Attempt any TWO questions.
1. How is a DMA operation performed? Consider a disk with 200 tracks where the queue has random requests from different processes in the order: 45, 48, 29, 17, 80, 150, 28 and 188. Find the seek time using FIFO, SSTF and SCAN. Assume the initial position of the head is 100.
2. How do you distinguish between deadlock and starvation? Describe. Explain the working mechanism of TLB.
3. Why do we need to schedule processes? Find the average waiting time and average turnaround time for the following set of processes using FCFS, SJF, RR (Quantum = 3) and shortest remaining time next.
Process | CPU Burst Time | Arrival Time |
---|---|---|
P1 | 20 | 0 |
P2 | 25 | 15 |
P3 | 10 | 30 |
P4 | 15 | 45 |
SECTION B
Attempt any EIGHT questions.
4. What is a system call? Describe the transitions between different states of a process.
5. Discuss contiguous and linked-list file allocation techniques.
6. Why do we need virtual memory? Describe the structure of a page table.
7. Illustrate the terms safe and unsafe state in deadlock prevention with a scenario.
8. How is a lock variable used in achieving mutual exclusion? Describe.
9. Why do we need a hierarchical directory system? Explain the structure of a disk.
10. Find the number of page faults using FIFO and LRU for the reference string 4, 7, 6, 1, 7, 6, 1, 2, 7, 2 with frame size 3.
11. Define working set. How does the clock replacement algorithm work?
12. Write short notes on:
- Inode
- RAID
Solutions
Past Question 2079 Solution
Section A: Long Answer Questions
1. Discuss single-level and two-level directory systems. Consider the following processes and answer the following questions.
Single Level Directory System:
- A single-level directory system organizes all files into one directory.
- Each file in the system has a unique name, and this directory holds the names and information for all files.
- It is simple and easy to implement but has limitations such as lack of organization and potential file name collisions in a large system.
Advantages of Single Level Directory System:
- Simple to implement.
- Easy access to files since all files are in one directory.
- Small systems with few files may work efficiently with this system.
Disadvantages of Single Level Directory System:
- Lack of organization when the number of files increases.
- Limited scalability as file names might collide.
- Difficult to manage files, especially in larger systems.
Two Level Directory System:
- A two-level directory system has one top-level (master) directory that contains a separate subdirectory for each user.
- Each user's files live in that user's subdirectory, so files can be grouped per user and different users can reuse the same file names.
Advantages of Two Level Directory System:
- Better organization: Files are logically grouped into subdirectories, making it easier to manage.
- Avoids name collisions: Different subdirectories can have files with the same name.
- Scalable: Easier to scale as the number of files grows.
Disadvantages of Two Level Directory System:
- Complexity: Slightly more complex than a single-level system.
- Requires additional space: There needs to be more storage for the subdirectories.
Process Allocation Table:
Process | Allocation (A, B, C, D) | Max (A, B, C, D) | Available (A, B, C, D) |
---|---|---|---|
P0 | 0 0 1 2 | 0 0 1 2 | 1 5 2 0 |
P1 | 1 0 0 0 | 1 7 5 0 | |
P2 | 1 3 5 4 | 2 3 5 6 | |
P3 | 0 6 3 2 | 0 6 5 2 | |
P4 | 0 0 1 4 | 0 6 5 6 | |
a. What is the content of the Need matrix?
The Need matrix is calculated by subtracting the Allocation matrix from the Max matrix. This gives the number of resources each process still needs to complete.
Need = Max – Allocation
Process | Need (A, B, C, D) |
---|---|
P0 | 0 0 0 0 |
P1 | 0 7 5 0 |
P2 | 1 0 0 2 |
P3 | 0 0 2 0 |
P4 | 0 6 4 2 |
b. Is the system in a safe state?
To determine whether the system is in a safe state, we apply the safety part of the Banker's Algorithm with Available = (1, 5, 2, 0):
- Need(P0) = (0, 0, 0, 0) ≤ (1, 5, 2, 0), so P0 can run; releasing its allocation gives Available = (1, 5, 3, 2).
- Need(P2) = (1, 0, 0, 2) ≤ (1, 5, 3, 2), so P2 can run; Available becomes (2, 8, 8, 6).
- Need(P1) = (0, 7, 5, 0) ≤ (2, 8, 8, 6), so P1 can run; Available becomes (3, 8, 8, 6).
- Need(P3) = (0, 0, 2, 0) ≤ (3, 8, 8, 6), so P3 can run; Available becomes (3, 14, 11, 8).
- Need(P4) = (0, 6, 4, 2) ≤ (3, 14, 11, 8), so P4 can run.
All processes can finish, for example in the order <P0, P2, P1, P3, P4>, so the system is in a safe state.
c. If P1 requests (0,4,2,0) can the request be granted immediately?
To check whether the request can be granted, compare it with both P1's Need and the Available vector.
Request = (0, 4, 2, 0)
- Request ≤ Need(P1) = (0, 7, 5, 0) and Request ≤ Available = (1, 5, 2, 0), so the resources can be allocated provisionally.
- After the provisional allocation, Available = (1, 1, 0, 0), Allocation(P1) = (1, 4, 2, 0) and Need(P1) = (0, 3, 3, 0). Running the safety check again still yields a safe sequence (for example <P0, P2, P1, P3, P4>), so the request can be granted immediately.
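For reference, a minimal Python sketch of this safety check, with the data hard-coded from the table above (the function and variable names are just illustrative):

```python
# Banker's safety check for the table above (resources A, B, C, D).
allocation = {
    "P0": [0, 0, 1, 2], "P1": [1, 0, 0, 0], "P2": [1, 3, 5, 4],
    "P3": [0, 6, 3, 2], "P4": [0, 0, 1, 4],
}
maximum = {
    "P0": [0, 0, 1, 2], "P1": [1, 7, 5, 0], "P2": [2, 3, 5, 6],
    "P3": [0, 6, 5, 2], "P4": [0, 6, 5, 6],
}
available = [1, 5, 2, 0]

# Need = Max - Allocation
need = {p: [m - a for m, a in zip(maximum[p], allocation[p])] for p in allocation}

def is_safe(available, allocation, need):
    """Return a safe sequence if one exists, otherwise None."""
    work = available[:]
    finished = {p: False for p in allocation}
    sequence = []
    progress = True
    while progress:
        progress = False
        for p in allocation:
            if not finished[p] and all(n <= w for n, w in zip(need[p], work)):
                # p can finish and then releases everything it holds
                work = [w + a for w, a in zip(work, allocation[p])]
                finished[p] = True
                sequence.append(p)
                progress = True
    return sequence if all(finished.values()) else None

print("Need:", need)
print("Safe sequence:", is_safe(available, allocation, need))
```

The same is_safe check can be re-run on the provisional state from part (c) to confirm that P1's request leaves the system safe.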
2. When does race condition occur in inter-process communication?
A race condition occurs in inter-process communication when multiple processes access shared resources concurrently, and the final result depends on the order of execution. This can lead to unexpected or incorrect behavior.
Example of a race condition: If two processes are trying to update a shared variable simultaneously, and the sequence of updates is not controlled, the final value may not be what was expected.
What does busy waiting mean and how can it be handled using the sleep and wakeup strategy?
- Busy waiting is when a process repeatedly checks a condition to see if it is satisfied, using CPU resources without making any progress. It wastes CPU cycles, as the process just waits and doesn’t do any meaningful work.
Solution with Sleep and Wakeup Strategy:
- Sleep: The process goes to sleep (suspended state) while it waits for a specific event or condition to be met.
- Wakeup: The process is resumed once the condition is satisfied, signaling the process that it can proceed.
This strategy improves system efficiency as it avoids constant CPU usage while waiting.
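As an illustration (not part of the original answer), the sketch below mimics the sleep/wakeup idea with Python threads: the consumer blocks on a condition variable instead of spinning, and the producer wakes it up when an item is available.

```python
import threading
from collections import deque

buffer = deque()
condition = threading.Condition()

def producer():
    for item in range(5):
        with condition:
            buffer.append(item)
            condition.notify()          # wakeup: a sleeping consumer may proceed

def consumer():
    consumed = 0
    while consumed < 5:
        with condition:
            while not buffer:           # no busy waiting: block instead of spinning
                condition.wait()        # sleep until the producer signals
            print("consumed", buffer.popleft())
            consumed += 1

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
```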
3. Define shell and system call.
- Shell: The shell is a command-line interface (CLI) that allows users to interact with the operating system. It provides a user interface to run commands, execute programs, and control the system.
- System Call: A system call is a request made by a program to the operating system’s kernel to perform a task, such as accessing hardware, file I/O, or managing processes.
Disk Scheduling Algorithms:
We have a disk with 201 cylinders numbered 0 to 200, and the disk arm starts at cylinder 95. The request queue contains: 82, 170, 43, 140, 24, 16, and 190.
FCFS (First Come First Serve):
- Order of access: 95 → 82 → 170 → 43 → 140 → 24 → 16 → 190
- Seek Time: |95-82| + |82-170| + |170-43| + |43-140| + |140-24| + |24-16| + |16-190| = 13 + 88 + 127 + 97 + 116 + 8 + 174 = 623
SSTF (Shortest Seek Time First):
- Order of access: 95 → 82 → 43 → 24 → 16 → 140 → 170 → 190
- Seek Time: |95-82| + |82-43| + |43-24| + |24-16| + |16-140| + |140-170| + |170-190| = 13 + 39 + 19 + 8 + 124 + 30 + 20 = 253
SCAN:
- In SCAN, the disk arm moves in one direction and serves requests along the way. We assume it moves towards 0, serving requests along the way.
- Order of access: 95 → 82 → 43 → 24 → 16 → 0 → 140 → 170 → 190
- Seek Time: |95-82| + |82-43| + |43-24| + |24-16| + |16-0| + |0-140| + |140-170| + |170-190| = 13 + 39 + 19 + 8 + 16 + 140 + 30 + 20 = 285
C-SCAN (Circular SCAN):
- In C-SCAN, the disk arm moves in one direction, and when it reaches the end, it jumps to the beginning.
- Order of access: 95 → 140 → 170 → 190 → 200 → 16 → 24 → 43 → 82
- Seek Time: |95-140| + |140-170| + |170-190| + |190-200| + |200-16| + |16-24| + |24-43| + |43-82| = 45 + 30 + 20 + 10 + 184 + 8 + 19 + 39 = 355
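A short Python sketch (using the same starting cylinder and request queue as above) that reproduces the FCFS and SSTF totals:

```python
# Seek-time calculation for FCFS and SSTF, head initially at cylinder 95.
requests = [82, 170, 43, 140, 24, 16, 190]
start = 95

def fcfs_seek(start, requests):
    total, pos = 0, start
    for r in requests:                 # serve requests in arrival order
        total += abs(pos - r)
        pos = r
    return total

def sstf_seek(start, requests):
    total, pos, pending = 0, start, list(requests)
    while pending:                     # always pick the closest pending request
        nearest = min(pending, key=lambda r: abs(pos - r))
        total += abs(pos - nearest)
        pos = nearest
        pending.remove(nearest)
    return total

print("FCFS seek:", fcfs_seek(start, requests))  # expected 623
print("SSTF seek:", sstf_seek(start, requests))  # expected 253
```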
Section B: Short Answer Questions
4. Distinguish between starvation and deadlock. How does the system schedule process using multiple queues?
Starvation occurs when a process is perpetually denied the resources it needs to complete its task. This can happen if higher-priority processes continually consume the resources.
Deadlock occurs when two or more processes are stuck in a state where each process is waiting for another to release resources, and none of them can proceed.
Multiple Queue Scheduling: In a multi-level queue scheduling system, processes are categorized into different queues based on priority or other criteria. Each queue has its own scheduling algorithm. The system allocates resources based on the queue’s priority.
5. List any two demerits of disabling interrupts to achieve mutual exclusion. Describe fixed and variable partitioning.
Demerits of disabling interrupts:
- If a process disables interrupts and never enables them again (for example, because of a bug or an infinite loop in its critical section), interrupts, including the clock interrupt, are no longer serviced and the whole system stops responding.
- It does not work on multiprocessor systems: disabling interrupts affects only the CPU that executed the instruction, so processes on other CPUs can still enter the critical section.
Fixed Partitioning:
- In fixed partitioning, memory is divided into a fixed number of partitions of fixed size at system start-up.
- Advantages: Simple and easy to implement.
- Disadvantages: Inefficient use of memory; a process smaller than its partition wastes the rest of it (internal fragmentation), and the number of partitions limits the degree of multiprogramming.
Variable Partitioning:
- In variable partitioning, partitions are created dynamically to match the size of each arriving process.
- Advantages: Better memory utilization, with no internal fragmentation.
- Disadvantages: External fragmentation develops as processes come and go, which may require compaction.
6. For the following dataset, compute average waiting time for SRTN and SJF.
Process | Arrival Time | Burst Time |
---|---|---|
P0 | 0 | 7 |
P1 | 2 | 4 |
P2 | 4 | 1 |
P3 | 5 | 4 |
SRTN (Shortest Remaining Time Next):
- SRTN is the preemptive version of SJF: whenever a new process arrives, the scheduler runs the process with the smallest remaining time.
- Schedule: P0 (0-2), P1 (2-4), P2 (4-5), P1 (5-7), P3 (7-11), P0 (11-16).
- Waiting times: P0 = 9, P1 = 1, P2 = 0, P3 = 2, so Average Waiting Time = (9 + 1 + 0 + 2) / 4 = 3.
SJF (Shortest Job First):
- SJF is non-preemptive: when the CPU becomes free, the process with the smallest burst time among those that have arrived runs to completion.
- Schedule: P0 (0-7), P2 (7-8), P1 (8-12), P3 (12-16).
- Waiting times: P0 = 0, P1 = 6, P2 = 3, P3 = 7, so Average Waiting Time = (0 + 6 + 3 + 7) / 4 = 4.
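The sketch below simulates both schedules for the table above and reproduces the averages; the one-unit time step and the arrival-time tie-break are assumptions of the example, not part of the original answer.

```python
# Simulate non-preemptive SJF and preemptive SRTN for (name, arrival, burst).
processes = [("P0", 0, 7), ("P1", 2, 4), ("P2", 4, 1), ("P3", 5, 4)]

def average_wait(preemptive):
    remaining = {p: b for p, _, b in processes}
    arrival = {p: a for p, a, _ in processes}
    completion, time, running = {}, 0, None
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:                         # CPU idle until the next arrival
            time = min(arrival[p] for p in remaining)
            continue
        if preemptive or running not in remaining:
            # pick the shortest (remaining) job; ties broken by arrival time
            running = min(ready, key=lambda p: (remaining[p], arrival[p]))
        time += 1                             # advance one time unit
        remaining[running] -= 1
        if remaining[running] == 0:
            completion[running] = time
            del remaining[running]
    waits = [completion[p] - a - b for p, a, b in processes]
    return sum(waits) / len(waits)

print("SJF  average waiting time:", average_wait(preemptive=False))  # expected 4.0
print("SRTN average waiting time:", average_wait(preemptive=True))   # expected 3.0
```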
7. Discuss the advantages and disadvantages of implementing a file system using a linked list.
- Dynamic File Size: The linked list structure can handle files of varying sizes without wasted space.
- Efficient Space Utilization: No need for pre-allocation of space.
Disadvantages:
- Slower Access: Random access to files is slower as each block must be accessed sequentially.
- Pointer Overhead: Each file block requires extra space for storing pointers.
8. Consider the page references 7,0,1,2,0,3,0,4,2,3,0,3,2. Find the number of page faults using OPR and FIFO, with 4 page frames.
FIFO (First-In-First-Out):
- FIFO replaces the page that has been in memory the longest when a new page must be loaded.
- Trace with 4 frames: faults on 7, 0, 1, 2, then 3 (evicts 7), 4 (evicts 0) and 0 (evicts 1); every other reference is a hit.
- Number of page faults with FIFO = 7.
OPR (Optimal Page Replacement):
- OPR replaces the page that will not be used for the longest time in the future.
- Trace with 4 frames: faults on 7, 0, 1, 2, then 3 (evicts 7, which is never referenced again) and 4 (evicts 1); the remaining references are hits.
- Number of page faults with OPR = 6.
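A small Python sketch (reference string and frame count taken from the question) that reproduces these counts:

```python
refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
frames = 4

def fifo_faults(refs, frames):
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)          # evict the oldest loaded page
            memory.append(page)
    return faults

def optimal_faults(refs, frames):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        future = refs[i + 1:]
        # Evict the resident page whose next use is farthest away (or never).
        victim = max(memory, key=lambda p: future.index(p) if p in future else len(future) + 1)
        memory[memory.index(victim)] = page
    return faults

print("FIFO faults:", fifo_faults(refs, frames))        # expected 7
print("Optimal faults:", optimal_faults(refs, frames))  # expected 6
```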
9. Describe the working mechanism of DMA.
Direct Memory Access (DMA) allows peripheral devices to directly access the main memory without involving the CPU. The DMA controller manages data transfer between memory and peripherals, improving system performance by freeing the CPU from handling data transfer.
10. What is the task of the disk controller? List some drawbacks of segmentation.
Disk Controller:
- The disk controller manages the data exchange between the disk and the CPU, handling tasks like reading/writing data, controlling disk arm movements, and maintaining the disk’s health.
Drawbacks of Segmentation:
- External Fragmentation: Free space gets fragmented.
- Complex Management: Managing segments becomes complex, leading to overhead.
11. Write the structure and advantages of TLB.
TLB (Translation Lookaside Buffer):
- TLB is a cache that stores recent virtual-to-physical address translations to speed up memory access.
Structure:
- TLB consists of entries that store virtual page numbers and their corresponding physical frame numbers.
Advantages:
- Fast Memory Access: Speeds up address translation.
- Reduced Memory Latency: Reduces the need for frequent memory accesses.
12. Why do we need the concept of locality of reference? List the advantages and disadvantages of Round Robin algorithm.
Locality of Reference refers to the tendency of a program to access a small subset of its memory space frequently. This concept is important for optimizing cache usage and improving system performance.
Advantages of Round Robin Algorithm:
- Fairness: Every process gets an equal share of CPU time.
- Simple to Implement: Easy to implement and understand.
Disadvantages of Round Robin Algorithm:
- Context Switching Overhead: Frequent context switching leads to overhead.
- Inefficiency for Processes with Unequal Burst Times: Short processes may have to wait for longer ones to complete.
Past Question 2080 Solution
Section A: Long Answer Questions
1. How DMA operation is performed? Consider a disk with 200 tracks and the queue has random requests from different processes in the order: 45, 48, 29, 17, 80, 150, 28, and 188. Find the seek time using FIFO, SSTF, and SCAN. Assume the initial position of the head is 100.
Direct Memory Access (DMA) Operation:
DMA allows peripheral devices to communicate directly with the main memory without CPU involvement, improving data transfer speeds.
Steps in DMA Operation:
- The CPU initiates a DMA operation by informing the DMA controller.
- The DMA controller takes control of the system bus, allowing data to be transferred directly between the peripheral and memory.
- After data transfer, the DMA controller releases control of the bus, and the CPU is notified of completion.
Disk Scheduling Algorithms:
- Given:
- Number of tracks: 200 (Tracks numbered 0 to 199)
- Initial head position: 100
- Request queue: {45, 48, 29, 17, 80, 150, 28, 188}
FIFO (First Come First Serve): FIFO serves requests in the order they arrive.
- Order of requests: 100 → 45 → 48 → 29 → 17 → 80 → 150 → 28 → 188.
- Seek time calculation (absolute difference between the current head position and the requested track):
- |100 – 45| = 55
- |45 – 48| = 3
- |48 – 29| = 19
- |29 – 17| = 12
- |17 – 80| = 63
- |80 – 150| = 70
- |150 – 28| = 122
- |28 – 188| = 160
- Total Seek Time (FIFO) = 55 + 3 + 19 + 12 + 63 + 70 + 122 + 160 = 504
SSTF (Shortest Seek Time First): SSTF always selects the pending request with the shortest seek time from the current head position.
- Order of requests: 100 → 80 → 48 → 45 → 29 → 28 → 17 → 150 → 188.
- Seek time calculation:
- |100 – 80| = 20
- |80 – 48| = 32
- |48 – 45| = 3
- |45 – 29| = 16
- |29 – 28| = 1
- |28 – 17| = 11
- |17 – 150| = 133
- |150 – 188| = 38
- Total Seek Time (SSTF) = 20 + 32 + 3 + 16 + 1 + 11 + 133 + 38 = 254
SCAN (Elevator Algorithm): SCAN moves the head in one direction, serves all requests along the way, and reverses once it reaches the end of the disk. Here the head is assumed to move towards cylinder 0 first.
- Order of requests: 100 → 80 → 48 → 45 → 29 → 28 → 17 → 0 → 150 → 188.
- Seek time calculation:
- |100 – 80| = 20
- |80 – 48| = 32
- |48 – 45| = 3
- |45 – 29| = 16
- |29 – 28| = 1
- |28 – 17| = 11
- |17 – 0| = 17
- |0 – 150| = 150
- |150 – 188| = 38
- Total Seek Time (SCAN) = 20 + 32 + 3 + 16 + 1 + 11 + 17 + 150 + 38 = 288
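A rough Python sketch of the SCAN calculation above, assuming the head first sweeps down to cylinder 0 and then reverses:

```python
# SCAN seek-time calculation, head at 100, initially moving towards cylinder 0.
requests = [45, 48, 29, 17, 80, 150, 28, 188]
start = 100

def scan_seek(start, requests):
    lower = sorted(r for r in requests if r <= start)
    upper = sorted(r for r in requests if r > start)
    # sweep down through the lower requests to cylinder 0, then back up
    order = list(reversed(lower)) + [0] + upper
    total, pos = 0, start
    for r in order:
        total += abs(pos - r)
        pos = r
    return total, order

total, order = scan_seek(start, requests)
print("Service order:", order)   # 80, 48, 45, 29, 28, 17, 0, 150, 188
print("Total seek:", total)      # expected 288
```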
2. How do you distinguish between deadlock and starvation? Describe the working mechanism of TLB.
Deadlock vs. Starvation:
Deadlock:
- Occurs when a set of processes are blocked because each process is holding a resource and waiting for another resource held by another process.
- Conditions for Deadlock:
- Mutual Exclusion: A resource is in a non-shareable mode.
- Hold and Wait: A process holds at least one resource and is waiting for others.
- No Preemption: Resources cannot be preempted.
- Circular Wait: A circular chain of processes exists, each holding a resource and waiting for another.
- Example: Process A holds resource 1 and waits for resource 2, while process B holds resource 2 and waits for resource 1.
Starvation:
- Occurs when a process is perpetually denied access to resources due to other higher-priority processes preempting its execution.
- Difference: In starvation, some processes may eventually complete, while in deadlock, no process can make progress.
TLB (Translation Lookaside Buffer) Mechanism:
- TLB is a high-speed cache used in CPU memory management to store recent virtual-to-physical memory address translations.
- Working Mechanism:
- When a process accesses memory, it uses virtual addresses.
- The system checks if the required virtual address is in the TLB.
- If found (TLB hit), the physical address is retrieved from the TLB.
- If not found (TLB miss), the system accesses the page table and updates the TLB with the new mapping.
- Advantages:
- Reduces the number of memory accesses to the page table.
- Increases memory access speed, improving system performance.
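Purely as an illustration of the hit/miss flow (not an actual OS implementation), here is a toy Python sketch in which a dictionary plays the role of the TLB in front of a page table; the page size and mappings are made-up values:

```python
page_table = {0: 5, 1: 9, 2: 3, 3: 7}   # virtual page -> physical frame (example values)
tlb = {}                                 # recently used translations
PAGE_SIZE = 4096

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page in tlb:                      # TLB hit: no page-table access needed
        frame = tlb[page]
    else:                                # TLB miss: walk the page table, then cache the mapping
        frame = page_table[page]
        tlb[page] = frame
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1A2C)))  # page 1, offset 0xA2C -> frame 9 -> 0x9A2C
print(hex(translate(0x1FFF)))  # second access to page 1 is a TLB hit
```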
3. Why do we need to schedule processes? Find the average waiting time and average turnaround time for the following set of processes using FCFS, SJF, RR (Quantum = 3) and shortest remaining time next.
Given Process Table:
Process | CPU Burst Time | Arrival Time |
---|---|---|
P1 | 20 | 0 |
P2 | 25 | 15 |
P3 | 10 | 30 |
P4 | 15 | 45 |
Need for Process Scheduling:
- Efficiency: Process scheduling ensures efficient use of CPU by managing multiple processes, preventing resource wastage.
- Fairness: It ensures fairness by allocating CPU time to each process fairly.
- Optimization: Optimizes system throughput, response time, and turnaround time.
FCFS (First Come First Serve):
Process | Arrival Time | CPU Burst Time | Completion Time | Turnaround Time | Waiting Time |
---|---|---|---|---|---|
P1 | 0 | 20 | 20 | 20 | 0 |
P2 | 15 | 25 | 45 | 30 | 5 |
P3 | 30 | 10 | 55 | 25 | 15 |
P4 | 45 | 15 | 70 | 25 | 10 |
- Average Waiting Time = (0 + 5 + 15 + 10) / 4 = 7.5
- Average Turnaround Time = (20 + 30 + 25 + 25) / 4 = 25
SJF (Shortest Job First, non-preemptive):
Process | Arrival Time | CPU Burst Time | Completion Time | Turnaround Time | Waiting Time |
---|---|---|---|---|---|
P1 | 0 | 20 | 20 | 20 | 0 |
P2 | 15 | 25 | 45 | 30 | 5 |
P3 | 30 | 10 | 55 | 25 | 15 |
P4 | 45 | 15 | 70 | 25 | 10 |
- With these arrival times only one process is ever waiting when the CPU becomes free, so SJF produces the same schedule as FCFS.
- Average Waiting Time = (0 + 5 + 15 + 10) / 4 = 7.5
- Average Turnaround Time = (20 + 30 + 25 + 25) / 4 = 25
RR (Round Robin, Quantum = 3):
Assuming that a process arriving at the same instant a quantum expires joins the ready queue ahead of the preempted process:
Process | Arrival Time | CPU Burst Time | Completion Time | Turnaround Time | Waiting Time |
---|---|---|---|---|---|
P1 | 0 | 20 | 26 | 26 | 6 |
P2 | 15 | 25 | 61 | 46 | 21 |
P3 | 30 | 10 | 54 | 24 | 14 |
P4 | 45 | 15 | 70 | 25 | 10 |
- Average Waiting Time = (6 + 21 + 14 + 10) / 4 = 12.75
- Average Turnaround Time = (26 + 46 + 24 + 25) / 4 = 30.25
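Because the Round Robin result depends on that tie-breaking rule, the sketch below simulates RR with quantum 3 under the stated convention and reproduces the completion times above:

```python
from collections import deque

# Assumed convention: when a quantum expires at the same instant a new process
# arrives, the arriving process joins the ready queue first.
processes = [("P1", 0, 20), ("P2", 15, 25), ("P3", 30, 10), ("P4", 45, 15)]
quantum = 3

remaining = {name: burst for name, _, burst in processes}
pending = sorted(processes, key=lambda p: p[1])   # not yet arrived, by arrival time
ready, time, completion = deque(), 0, {}

def admit(upto):
    """Move every process that has arrived by `upto` into the ready queue."""
    while pending and pending[0][1] <= upto:
        ready.append(pending.pop(0)[0])

admit(0)
while ready or pending:
    if not ready:                      # CPU idle until the next arrival
        time = pending[0][1]
        admit(time)
        continue
    name = ready.popleft()
    run = min(quantum, remaining[name])
    time += run
    remaining[name] -= run
    admit(time)                        # arrivals during this slice enter the queue first
    if remaining[name] > 0:
        ready.append(name)             # the preempted process re-enters behind them
    else:
        completion[name] = time

for name, at, burst in processes:
    tat = completion[name] - at
    print(name, "completion:", completion[name], "turnaround:", tat, "waiting:", tat - burst)
```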
Shortest Remaining Time Next (SRTN):
Process | Arrival Time | CPU Burst Time | Completion Time | Turnaround Time | Waiting Time |
---|---|---|---|---|---|
P1 | 0 | 20 | 20 | 20 | 0 |
P2 | 15 | 25 | 55 | 40 | 15 |
P3 | 30 | 10 | 40 | 10 | 0 |
P4 | 45 | 15 | 70 | 25 | 10 |
- P1 is not preempted at t = 15 because its remaining time (5) is shorter than P2's burst (25); P2 is preempted at t = 30 by P3 and resumes at t = 40.
- Average Waiting Time = (0 + 15 + 0 + 10) / 4 = 6.25
- Average Turnaround Time = (20 + 40 + 10 + 25) / 4 = 23.75
Section B: Short Answer Questions
4. What is system call? Describe the transition between different states of process.
System Call:
- A system call is a mechanism that allows a program to request services from the operating system’s kernel, such as file operations, process control, and communication.
Process States:
- New: Process is being created.
- Ready: Process is waiting for CPU time.
- Running: Process is currently being executed by the CPU.
- Waiting (Blocked): Process is waiting for some event (e.g., I/O completion).
- Terminated: Process has finished execution.
5. Discuss about contiguous and linked list file allocation technique.
Contiguous Allocation:
- Files are stored in contiguous blocks of memory. It’s easy to implement and provides fast access.
- Drawback: It suffers from fragmentation, which may lead to inefficient storage.
Linked List Allocation:
- Each file is stored in blocks scattered around the disk. Each block contains a pointer to the next block.
- Advantages: Eliminates external fragmentation. Drawback: random access to a file is slow, because the blocks must be followed pointer by pointer.
6. Why do we need virtual memory? Describe the structure of a page table.
Virtual Memory:
- Virtual memory allows programs to use more memory than physically available by swapping data between RAM and disk.
- Structure of Page Table: The page table is an array indexed by virtual page number; each entry maps a virtual page to a physical frame and typically also holds control bits such as present/absent, protection, modified and referenced bits.
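As a toy illustration of this structure (the page size, the mappings and the present bit are assumed example values, not from the question):

```python
PAGE_SIZE = 1024   # assumed page size for the example

# Each entry is (present, frame_number); present=False means the page is on disk.
page_table = [(True, 2), (False, None), (True, 7), (True, 0)]

def to_physical(virtual_address):
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    present, frame = page_table[page]
    if not present:
        raise RuntimeError(f"page fault on virtual page {page}")  # the OS would load it from disk
    return frame * PAGE_SIZE + offset

print(to_physical(2 * PAGE_SIZE + 100))  # virtual page 2 -> frame 7 -> 7*1024 + 100 = 7268
```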
7. Illustrate the term safe and unsafe state in deadlock prevention with scenario.
Safe State:
- A state where the system can allocate resources in such a way that all processes can eventually complete without causing a deadlock.
Unsafe State:
- A state where there is a possibility that deadlock might occur, even though it hasn’t yet occurred.
8. How is a lock variable used in achieving mutual exclusion? Describe.
A lock variable is a shared flag that indicates whether the critical section is in use. Before entering, a process tests the lock: if it is 0 (free), the process sets it to 1 and enters the critical section; on leaving, it resets the lock to 0 so another process can enter. Note that this guarantees mutual exclusion only if the test and the set are performed atomically (for example with a test-and-set instruction); otherwise two processes may both read 0 and enter together.
9. Why do we need a hierarchical directory system? Explain the structure of a disk.
A hierarchical directory system organizes files in a tree-like structure, making it easier to manage, group and access large numbers of files without name conflicts.
Disk Structure:
- A disk surface is divided into concentric circles called tracks, and each track is divided into sectors. The set of tracks at the same arm position across all platters forms a cylinder.
10. How do you calculate CPU burst time in preemptive and non-preemptive scheduling algorithms?
- Preemptive: In preemptive algorithms like Round Robin, CPU burst time is divided into time slices (quantums). A process can be interrupted to allow another process to execute.
- Non-preemptive: In algorithms like SJF, once a process starts executing, it runs to completion without interruption. CPU burst time is calculated based on the process’s total required execution time.
Past Question 2081 Solution
SECTION A: Long Answer Questions
1. Explain the translation of logical address into physical address using the segment table with necessary diagram. List advantages and disadvantages of segmentation.
Logical Address Translation using Segmentation:
- Segmentation is a memory management scheme that divides the program into different logical segments, such as code, data, stack, etc.
- The logical address generated by the CPU consists of two parts: segment number and offset.
- The segment table contains the base address and the length of each segment in memory.
Translation Process:
- The CPU generates a logical address in the form of (segment number, offset).
- The segment number is used to find the corresponding segment entry in the segment table.
- The base address from the segment table entry is added to the offset to get the physical address.
Example:
Let's assume the segment table has the following entries:
- Segment 0: Base Address = 1000, Length = 300
- Segment 1: Base Address = 2000, Length = 500
- Segment 2: Base Address = 3000, Length = 200
A logical address of (Segment 1, offset 400) is provided.
Translation:
- Find the base address for Segment 1 from the segment table: 2000.
- Check that the offset 400 is less than the segment length 500 (otherwise the hardware raises an addressing trap).
- Add the offset 400 to the base address: 2000 + 400 = 2400.
- The physical address is 2400.
Diagram:
| Segment Number | Base Address | Length |
|----------------|--------------|--------|
| 0 | 1000 | 300 |
| 1 | 2000 | 500 |
| 2 | 3000 | 200 |
Logical Address: (1, 400) -> Physical Address: 2000 + 400 = 2400
Advantages of Segmentation:
- Logical division: Segments represent logical units, making the program easier to manage and modularize.
- Protection: Each segment can have different protection levels (e.g., read-only, execute-only).
- Dynamic memory allocation: Segments can grow or shrink independently.
Disadvantages of Segmentation:
- External fragmentation: Over time, unused memory regions between segments can cause fragmentation.
- Complex memory management: Requires hardware and software support for segment table management.
- Overhead: Segment tables can increase memory overhead.
2. Find the seek time using SCAN, C-SCAN, Look, and C-Look disk scheduling algorithms for processing the following request queue:
35, 70, 45, 15, 65, 20, 80, 90, 75, 130.
- Disk Parameters:
- Number of tracks: 0 to 150
- Initial head position: 30
- Moving outward (towards higher track numbers).
Disk Scheduling Algorithms:
Assumptions: the arm starts at cylinder 30 and moves outward (towards higher-numbered tracks); SCAN and C-SCAN travel to the last cylinder (150) before reversing or jumping back, while Look and C-Look reverse or jump at the last pending request; the return jump in C-SCAN and C-Look is not counted as seek time.
SCAN:
- The arm moves outward serving requests, continues to the end of the disk (150), then reverses and serves the remaining requests.
- Order of processing: 30 → 35 → 45 → 65 → 70 → 75 → 80 → 90 → 130 → 150 → 20 → 15.
- Seek time calculation: outward movement 30 → 150 = 120; return movement 150 → 15 = 135.
- Total seek time = 120 + 135 = 255.
C-SCAN:
- The arm moves outward to the end of the disk (150), jumps back to cylinder 0, and then serves the remaining requests while moving outward again.
- Order of processing: 30 → 35 → 45 → 65 → 70 → 75 → 80 → 90 → 130 → 150, jump to 0, then 15 → 20.
- Seek time calculation: outward movement 30 → 150 = 120; movement after the jump 0 → 20 = 20.
- Total seek time = 120 + 20 = 140.
Look:
- The arm moves outward only as far as the last request (130), then reverses and serves the remaining requests.
- Order of processing: 30 → 35 → 45 → 65 → 70 → 75 → 80 → 90 → 130 → 20 → 15.
- Seek time calculation: outward movement 30 → 130 = 100; return movement 130 → 15 = 115.
- Total seek time = 100 + 115 = 215.
C-Look:
- The arm moves outward to the last request (130), jumps back to the lowest pending request (15), and serves the rest while moving outward.
- Order of processing: 30 → 35 → 45 → 65 → 70 → 75 → 80 → 90 → 130, jump to 15, then 15 → 20.
- Seek time calculation: outward movement 30 → 130 = 100; movement after the jump 15 → 20 = 5.
- Total seek time = 100 + 5 = 105.
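A short Python sketch of the Look and C-Look calculations above, under the same convention that the C-Look return jump is not counted:

```python
# Look / C-Look seek-time calculation, head at cylinder 30 moving outward.
requests = [35, 70, 45, 15, 65, 20, 80, 90, 75, 130]
start = 30

def head_movement(path, start):
    total, pos = 0, start
    for r in path:
        total += abs(pos - r)
        pos = r
    return total

upper = sorted(r for r in requests if r >= start)   # served while moving outward
lower = sorted(r for r in requests if r < start)    # served afterwards

look_path = upper + list(reversed(lower))           # reverse at the last request
clook_total = head_movement(upper, start) + head_movement(lower, lower[0])

print("Look seek:", head_movement(look_path, start))  # expected 215
print("C-Look seek:", clook_total)                     # expected 105
```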
3. Explain the Sleeping Barber problem. Illustrate on how it can be solved.
Sleeping Barber Problem:
This is a classic synchronization problem involving a barbershop with one barber and several customers. The barber sleeps when there are no customers, and wakes up when a customer arrives. The customers wait in a queue if the barber is busy and leave if the shop is full.
Problem Description:
- There is a barber (who cuts hair) and several customers.
- The barber will cut hair if there is a customer, otherwise, he will sleep.
- If a customer arrives and the barber is sleeping, the customer will wake the barber.
- If the barber is busy and there is no space in the waiting room, the customer leaves.
- If the barber is busy and there is space, the customer waits.
Solution using Semaphores:
The solution requires synchronization to avoid race conditions between the barber and the customers. We can use semaphores to manage the processes.
Semaphores used:
- mutex (binary semaphore): mutual exclusion when accessing the shared waiting-room counter.
- waitingCustomers (counting semaphore): keeps track of the number of waiting customers.
- barberChair (binary semaphore): lets the barber admit one customer to the chair at a time.
Algorithm:
Customer process:
- Acquire mutex and check the waiting room.
- If there is a free chair, sit down, signal waitingCustomers, release mutex, and wait on barberChair.
- If the waiting room is full, release mutex and leave.
Barber process:
- Wait on waitingCustomers (sleep while there are no customers).
- When a customer arrives, signal barberChair and cut that customer's hair.
Diagrammatic Representation:
[Barber sleeps] <-- (mutex locked) --> [Customer arrives] --> [Wakes up barber]
[Customer enters] <-- (mutex locked) --> [Customer waits] <-- [Barber cuts hair]
Conclusion:
- The Sleeping Barber problem demonstrates how to use semaphores to manage synchronization and ensure that the barber and customers interact without conflicts.
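A runnable Python sketch of this solution using semaphores; the waiting-room size and the number of customers are arbitrary example values:

```python
import threading
import time
import random

CHAIRS = 3
customers = threading.Semaphore(0)      # counts waiting customers; the barber sleeps on it
barber_ready = threading.Semaphore(0)   # barber signals that the chair is free
mutex = threading.Lock()                 # protects the waiting-room counter
waiting = 0

def barber():
    global waiting
    while True:
        customers.acquire()              # sleep until a customer arrives
        with mutex:
            waiting -= 1
        barber_ready.release()           # call the next customer to the chair
        print("barber: cutting hair")
        time.sleep(0.05)                 # time spent cutting hair

def customer(cid):
    global waiting
    with mutex:
        if waiting == CHAIRS:            # waiting room full: leave
            print(f"customer {cid}: shop full, leaving")
            return
        waiting += 1
    customers.release()                  # wake the barber if he is asleep
    barber_ready.acquire()               # wait for the barber
    print(f"customer {cid}: getting a haircut")

threading.Thread(target=barber, daemon=True).start()
threads = [threading.Thread(target=customer, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
    time.sleep(random.uniform(0, 0.03))
for t in threads:
    t.join()
```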
SECTION B: Short Answer Questions
4. Explain microkernels and exokernels.
Microkernels:
- Microkernels aim to minimize the amount of functionality in the kernel. They only provide basic services like communication between processes, memory management, and device management. Other services like file
systems, networking, etc., are handled by user-space programs.
- Advantages: Stability, modularity, and security.
- Disadvantages: Overhead due to communication between user space and kernel space.
Exokernels:
- Exokernels are designed to provide minimal abstractions for hardware resources and allow applications to manage these resources directly. The goal is to provide more control to the application while still maintaining security and isolation.
- Advantages: Direct access to hardware resources, high performance.
- Disadvantages: Complexity in application-level management of resources.
5. Consider a swapping system in which memory consists of the following hole sizes in memory order:
15 MB, 2 MB, 10 MB, 6 MB, 8 MB, and 20 MB. Which hole is taken for successive segment requests of: (a) 10 MB (b) 10 MB For first fit, next fit, and best fit.
First Fit:
- Allocates the first hole large enough for the request.
- (a) 10 MB → Allocated to the first 15 MB hole.
- (b) 10 MB → Allocated to the next hole (10 MB hole).
Next Fit:
- Allocates the first hole large enough starting from the last allocated hole.
- (a) 10 MB → Allocated to the first 15 MB hole.
- (b) 10 MB → Allocated to the next 10 MB hole.
Best Fit:
- Allocates the smallest hole that is large enough for the request.
- (a) 10 MB → Allocated to the 10 MB hole.
- (b) 10 MB → Allocated to the 15 MB hole.
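A small Python sketch of the three placement strategies applied to the same hole list (hole sizes are shrunk in place; the reported indices are just for illustration):

```python
def first_fit(holes, size):
    for i, hole in enumerate(holes):
        if hole >= size:
            holes[i] -= size
            return i
    return None

def make_next_fit():
    last = 0                                       # remember where the previous search stopped
    def next_fit(holes, size):
        nonlocal last
        n = len(holes)
        for step in range(n):
            i = (last + step) % n
            if holes[i] >= size:
                holes[i] -= size
                last = i
                return i
        return None
    return next_fit

def best_fit(holes, size):
    candidates = [i for i, hole in enumerate(holes) if hole >= size]
    if not candidates:
        return None
    i = min(candidates, key=lambda i: holes[i])    # smallest hole that still fits
    holes[i] -= size
    return i

for name, fit in [("first fit", first_fit), ("next fit", make_next_fit()), ("best fit", best_fit)]:
    holes = [15, 2, 10, 6, 8, 20]                  # hole sizes in memory order (MB)
    picks = [fit(holes, 10), fit(holes, 10)]       # two successive 10 MB requests
    print(name, "-> holes chosen (by index):", picks)
```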
6. Explain how semaphore solves the problem of critical section.
- A semaphore is a synchronization tool that helps prevent race conditions by controlling access to shared resources.
- It maintains a counter to control access to a critical section of code.
- Binary semaphore (mutex) allows only one process at a time to enter the critical section.
- Counting semaphore is used when a resource has a limited number of instances (e.g., a pool of resources).
Solution:
- A process needs to wait (decrease the semaphore count) before entering the critical section and signal (increase the semaphore count) after leaving it.
- This ensures that only one process accesses the shared resource at a time, preventing race conditions.
7. How do you think deadlock can be avoided? Explain.
Deadlock Avoidance: The system grants a resource request only if the resulting state is still safe, i.e., there is some order in which every process can obtain its remaining resources and finish.
Techniques:
- Resource Allocation Graph: a directed graph of processes and resources that is checked, before granting a request, to make sure no cycle (and hence no potential deadlock) is introduced.
- Banker's Algorithm: using the Allocation, Max, Need and Available information, the system runs a safety check and grants a request only if a safe sequence still exists afterwards.
Avoidance differs from deadlock prevention, which instead removes one of the four necessary conditions (mutual exclusion, hold and wait, no preemption, circular wait) so that deadlock can never occur at all.
8. Explain Inter-Process Communication in Linux.
Inter-Process Communication (IPC) in Linux enables processes to exchange data and synchronize their activities. IPC mechanisms include:
- Pipes: Allows data to be transferred from one process to another.
- Message Queues: Allows processes to send and receive messages.
- Shared Memory: Processes share a region of memory to communicate.
- Semaphores: Used for synchronization between processes.
IPC ensures that processes do not interfere with each other and allows for efficient data sharing.
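As a concrete illustration of the first mechanism, here is a minimal parent/child pipe example (POSIX-only, since it uses os.fork):

```python
import os

# Anonymous pipe shared between a parent and a child process.
read_end, write_end = os.pipe()

pid = os.fork()
if pid == 0:                      # child: writes a message into the pipe
    os.close(read_end)
    os.write(write_end, b"hello from the child process\n")
    os.close(write_end)
    os._exit(0)
else:                             # parent: reads the child's message
    os.close(write_end)
    message = os.read(read_end, 1024)
    os.close(read_end)
    os.waitpid(pid, 0)
    print("parent received:", message.decode())
```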
9. List different file structures and explain them.
Sequential File Structure: Data is stored in sequence, one record after another.
- Advantages: Simple and efficient for reading all records in order.
- Disadvantages: Not efficient for searching, inserting, or deleting.
Indexed File Structure: An index is maintained for faster access to records.
- Advantages: Faster searching.
- Disadvantages: Additional memory overhead for the index.
Hashed File Structure: A hash function is used to map records to locations in the file.
- Advantages: Very fast access for searching.
- Disadvantages: Collisions and storage overflow may occur.
10. Calculate the average waiting time and turnaround time using priority algorithm (Priority 1 being the highest) for the given scenario:
Processes:
Process | CPU Burst Time | Arrival Time | Priority |
---|---|---|---|
A | 3 | 0 | 3 |
B | 2 | 2 | 3 |
C | 4 | 3 | 2 |
D | 2 | 3 | 1 |
Solution:
- At t = 0 only A has arrived, so A runs first (0-3). By t = 3, B, C and D have all arrived, so the remaining processes run in priority order: D (priority 1), then C (priority 2), then B (priority 3).
Gantt Chart:
| A | D | C | B |
0   3   5   9   11
- Turnaround time = Completion time – Arrival time; Waiting time = Turnaround time – CPU burst time.
Process | Completion Time | Turnaround Time | Waiting Time |
---|---|---|---|
A | 3 | 3 | 0 |
B | 11 | 9 | 7 |
C | 9 | 6 | 2 |
D | 5 | 2 | 0 |
- Average Waiting Time = (0 + 7 + 2 + 0) / 4 = 2.25
- Average Turnaround Time = (3 + 9 + 6 + 2) / 4 = 5