IMPLEMENTING FILES
Contiguous Allocation: The simplest allocation scheme is to store each file as a
contiguous run of disk blocks (with 1-KB blocks, a 50-KB file
would be allocated 50 consecutive blocks).
Advantages:
Simple to implement (keeping track of where a file is stored
reduces to remembering the address of its first block and its length).
Read performance is excellent (because the entire file can be
read from the disk in a single operation).
Drawback:
The disk becomes fragmented: deleting files leaves holes that
may be too small for new files (external fragmentation).
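A minimal sketch of contiguous allocation (hypothetical block bitmap and first-fit search, not from the notes) makes the fragmentation problem concrete: after deletions, the total free space may suffice but no single hole is large enough.

```python
# Hypothetical sketch of contiguous allocation with first-fit search.
# bitmap[i] is True when block i is allocated.

def allocate_contiguous(bitmap, n_blocks):
    """Find the first run of n_blocks free blocks, mark them allocated,
    and return the start block; return None if no hole is large enough."""
    run_start, run_len = 0, 0
    for i, used in enumerate(bitmap):
        if used:
            run_start, run_len = i + 1, 0
        else:
            run_len += 1
            if run_len == n_blocks:
                for j in range(run_start, run_start + n_blocks):
                    bitmap[j] = True
                return run_start
    return None

disk = [False] * 16                       # a tiny 16-block disk
a = allocate_contiguous(disk, 6)          # file A: blocks 0..5
b = allocate_contiguous(disk, 6)          # file B: blocks 6..11
c = allocate_contiguous(disk, 4)          # file C: blocks 12..15

for i in range(0, 6):   disk[i] = False   # delete file A
for i in range(12, 16): disk[i] = False   # delete file C
# 10 blocks are now free, but the largest hole is only 6 blocks,
# so a 7-block file cannot be allocated: external fragmentation.
```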
Linked List Allocation: The second method for storing files is to keep each one as a
linked list of disk blocks, with the first word of each block
used as a pointer to the next one.
Drawback:
Random access is extremely slow, and some memory in each
block is wasted on storing the pointer.
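The cost of random access can be sketched as follows (a hypothetical in-memory model, where each block stores a pointer to the next block of the same file): reading block k requires following k pointers from the start of the chain.

```python
# Hypothetical sketch of linked-list allocation: disk maps a block
# number to (next_block, payload) for blocks belonging to a file.

disk = {}

def write_chain(blocks, payloads):
    """Lay a file out as a linked chain over the given block numbers."""
    for i, (blk, data) in enumerate(zip(blocks, payloads)):
        nxt = blocks[i + 1] if i + 1 < len(blocks) else None
        disk[blk] = (nxt, data)

def read_block(first, k):
    """Random access: follow k pointers from the file's first block."""
    block = first
    for _ in range(k):
        block = disk[block][0]    # one pointer chase per logical block
    return disk[block][1]

# The file's blocks need not be contiguous on disk.
write_chain([7, 3, 12, 9], ["aa", "bb", "cc", "dd"])
```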
Linked List Allocation Using an Index:
The disadvantage of linked-list allocation can be eliminated by taking the pointer word from each disk block and putting it in a table or index in memory (such a table is known as a FAT, file allocation table).
Disadvantage:
The entire table must be in memory all the time for the scheme to work.
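The table-based variant can be sketched like this (a hypothetical FAT-style table, with sentinel values of my choosing): the chain of a file is followed entirely in memory, without reading the data blocks.

```python
# Hypothetical FAT-style sketch: fat[i] holds the next block of the
# file that owns block i, EOF at the end of a chain, FREE otherwise.

FREE, EOF = -2, -1
fat = [FREE] * 16

def create_file(blocks):
    """Link the given block numbers into a chain; return the first block."""
    for cur, nxt in zip(blocks, blocks[1:]):
        fat[cur] = nxt
    fat[blocks[-1]] = EOF
    return blocks[0]

def file_blocks(first):
    """Walk the chain using only the in-memory table."""
    chain, b = [], first
    while b != EOF:
        chain.append(b)
        b = fat[b]
    return chain

start = create_file([4, 9, 2, 11])   # a file scattered over the disk
```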
I-nodes:
Another method of keeping track of which blocks belong to which file is to associate with each file a small table called an i-node (index node), which lists the attributes and the disk addresses of the file's blocks.
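A minimal i-node sketch (hypothetical field names and a single-indirect block, chosen for illustration; real i-nodes also have double- and triple-indirect blocks):

```python
# Hypothetical i-node: attributes plus the disk addresses of the file's
# blocks, with one single-indirect block for files beyond N_DIRECT blocks.

from dataclasses import dataclass, field

N_DIRECT = 4                       # direct addresses held in the i-node itself

@dataclass
class Inode:
    size: int = 0
    owner: str = "root"
    direct: list = field(default_factory=lambda: [None] * N_DIRECT)
    indirect: list = None          # a block holding further addresses

def block_address(inode, k):
    """Disk address of logical block k of the file."""
    if k < N_DIRECT:
        return inode.direct[k]
    return inode.indirect[k - N_DIRECT]

node = Inode(size=6 * 1024,
             direct=[10, 11, 12, 13],
             indirect=[20, 21])    # logical blocks 4 and 5
```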
Implementing Directories:
Files on a UNIX system are organized into groups called
directories, which contain files and other directories.
Directories are arranged into a hierarchy, forming the tree of
the file system.
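Pathname lookup in such a hierarchy can be sketched with nested dictionaries (a hypothetical model: directories map names to files or to further directories):

```python
# Hypothetical directory tree: dicts are directories, strings are files.

root = {
    "etc": {"passwd": "file:passwd"},
    "home": {
        "alice": {"notes.txt": "file:notes"},
    },
}

def lookup(path):
    """Resolve an absolute path one component at a time,
    the way a file system walks the directory hierarchy."""
    node = root
    for component in path.strip("/").split("/"):
        node = node[component]   # KeyError models "No such file or directory"
    return node
```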
File Sharing: In a multiuser system there is always a need to share a file
among a number of users.
File sharing determines the manner in which authorized users
of a file may share it and the manner in which the results
of their file manipulations are visible to one another.
Two kinds of file sharing mode:
Sequential sharing
Concurrent sharing
In sequential sharing, a shared file can be
accessed by only one program at any point in time.
In concurrent sharing, it is essential to avoid mutual
interference between the various users of a shared file.
It is also necessary to change access rights as
requirements change.
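One common way sequential sharing is enforced in practice is with advisory locks; a sketch using POSIX `flock` (assuming a POSIX system; the scenario below is illustrative, with two opens of the same file standing in for two users):

```python
# Hypothetical sketch: the first opener takes an exclusive advisory lock;
# a second opener is refused until the lock is released.

import fcntl, os, tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

f1 = open(path, "r+")
fcntl.flock(f1, fcntl.LOCK_EX | fcntl.LOCK_NB)   # first user locks the file

f2 = open(path, "r+")
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    busy = False
except BlockingIOError:
    busy = True                                  # second user must wait

fcntl.flock(f1, fcntl.LOCK_UN)                   # first user releases the lock
fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)   # now the second user succeeds
f1.close(); f2.close(); os.unlink(path)
```

Because the lock is advisory, it only coordinates programs that choose to take it; it does not stop an uncooperative program from writing the file directly.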
Disk Space Management:
Files are normally stored on disk, so management of disk
space is a major concern to file system designers.
When a file is deleted, its disk space is added to the free-space
list. This list can be implemented in many ways:
Bit vector
Linked List
Grouping
Counting
1. Bit Vector:
The free-space list is implemented as a bit vector (bitmap).
Each block is represented by one bit: if the block is free, the bit is 1;
if the block is allocated, the bit is 0.
2. Linked List:
Here, all the free blocks are linked together, with a pointer to the
first free block kept in a special location on the disk.
It needs space only for the pointer to the beginning of the
chain.
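The linked free list behaves like a stack of blocks; a hypothetical in-memory sketch (here the per-block pointers are modeled with a dict):

```python
# Hypothetical linked free list: each free block stores the number of the
# next free block; only the head pointer lives in a special location.

next_free = {5: 9, 9: 2, 2: None}   # free chain: 5 -> 9 -> 2
free_head = 5                        # the single pointer the scheme needs

def alloc_block():
    """Pop the head of the free chain."""
    global free_head
    block = free_head
    free_head = next_free.pop(block)
    return block

def free_block(block):
    """Push a freed block onto the front of the chain."""
    global free_head
    next_free[block] = free_head
    free_head = block
```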
3. Grouping:
It stores the addresses of n free blocks in the first free block.
The last of these addresses points to another block holding the
addresses of a further n free blocks, and so on.
Such a block is often called an index block.
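Grouping can be sketched as a chain of index blocks (a hypothetical layout of my choosing, with n = 3 entries per index block):

```python
# Hypothetical grouping sketch: each index block stores the addresses of
# n-1 plain free blocks plus the address of the next index block (or None).

index_blocks = {
    4:  [7, 8, 10],     # block 4 lists free blocks 7, 8 and index block 10
    10: [11, 12, None], # block 10 lists free blocks 11, 12; chain ends
}

def all_free_blocks(first_index):
    """Collect every free block reachable from the first index block."""
    free, idx = [], first_index
    while idx is not None:
        free.append(idx)               # the index block itself is free
        *group, idx = index_blocks[idx]
        free.extend(group)
    return free
```

The advantage over a plain linked list is that the addresses of a whole group of free blocks can be found with a single disk read.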
4. Counting:
Every entry in the free list contains the address of the first
free block of a run and the number of consecutive free blocks
that follow it.
This helps in fast allocation of consecutive free blocks.
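A counting free list is compact when free space occurs in runs; a hypothetical sketch using (first_block, run_length) pairs:

```python
# Hypothetical counting sketch: free space is kept as runs of
# consecutive blocks, stored as (first_block, length) pairs.

free_runs = [(3, 4), (20, 2)]   # blocks 3-6 and 20-21 are free

def alloc_blocks(n):
    """Allocate n consecutive blocks from the first large-enough run."""
    for i, (start, length) in enumerate(free_runs):
        if length >= n:
            if length == n:
                free_runs.pop(i)                   # run consumed entirely
            else:
                free_runs[i] = (start + n, length - n)
            return start
    return None
```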
File System Reliability: File system reliability concerns the ability of a file system to
function correctly despite occasional faults in the system.
There are two aspects of file system reliability:
Ensuring correctness of file creation, deletion, and updates.
Preventing loss of data in files.
File System Reliability Techniques:
1. Recovery:
It is used when a failure is noticed.
Restores the data to its original state.
2. Fault Tolerance:
Used to guard against loss of integrity of the file
system when a fault occurs.
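One widely used technique in this spirit (an illustrative sketch, not a method named in these notes) is to update a file by writing a complete new copy and atomically renaming it into place, so a crash mid-update leaves either the old version or the new one, never a half-written mix:

```python
# Hypothetical sketch of an atomic file update: write a temporary copy,
# force it to disk, then rename it over the original.

import os, tempfile

def atomic_update(path, data):
    dir_ = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_)      # temp file on the same file system
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())              # force new contents to disk
        os.replace(tmp, path)                 # atomic rename on POSIX systems
    except BaseException:
        os.unlink(tmp)
        raise
```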
File System Performance: There are many techniques used to ensure good performance
of a file system.
Most of them are concerned with faster access to data.
Hash tables and B+ trees are used to make directory searches
more efficient.
The techniques of caching & buffering are used to speed up
the accesses to file data.
Caching is the most basic technique for speeding up access to
information.
It involves keeping the most frequently accessed data items in
memory to speed up repeated accesses to them.
Buffering loads information into memory in advance of
future references. (For example, directories are cached in
memory when accessed for the first time; thus, a directory
used to resolve a pathname is kept in the memory cache to
speed up future references to files located in it.)