This Concept Map, created with IHMC CmapTools, has information related to: Andrew FS & Enhancements.

Advances include improvements in storage organization:
- Redundant arrays of inexpensive disks (RAID): data blocks are segmented into fixed-size chunks and stored in stripes across several disks, along with redundant error-correcting codes that enable the data blocks to be reconstructed completely and operation to continue normally in case of disk failures (sketched below).
- Log-structured file storage (LFS): like Spritely NFS, this technique originated in the Sprite distributed OS project at Berkeley.

Advances include NFS enhancements:
- Spritely NFS: achieved one-copy update semantics by extending the NFS protocol to include open and close operations and by adding a callback mechanism to enable the server to notify clients of the need to invalidate cache entries.
- Achieving one-copy update semantics: the stateless server architecture of NFS brought great advantages in robustness and ease of implementation, but it precluded the achievement of precise one-copy update semantics.
- NQNFS: had similar aims to Spritely NFS: to add more precise cache consistency to the NFS protocol and to improve performance through better use of caching.
- WebNFS: the advent of the Web and Java applets led to the recognition by the NFS development team and others that some Internet applications could benefit from direct access to NFS servers without many of the overheads associated with the emulation of UNIX file operations included in standard NFS clients.
- NFS version 4: aims to make it practical to use NFS in wide-area networks and Internet applications.

Advances include AFS enhancements:
- DCE/DFS: was based on AFS but goes beyond it in its approach to cache consistency.

Advances include new design approaches:
- The availability of high-performance networks such as ATM has paved the way for new approaches to providing persistent storage systems.
- xFS: a serverless network file system architecture, motivated by (1) the opportunity provided by fast switched LANs for multiple file servers in a local network to transfer bulk data to clients concurrently, (2) the increased demand for access to shared data, and (3) the fundamental limitations of systems based on central file servers.
- Frangipani: a highly scalable distributed file system whose design separates persistent storage responsibilities from other file service actions.

Andrew File System:
- Whole-file serving: the entire contents of directories and files are transmitted to client computers by AFS servers.
- Whole-file caching: once a copy of a file or a chunk has been transferred to a client computer, it is stored in a cache on the local disk.

Implementation:
- AFS is implemented as two software components that exist as UNIX processes, called Vice and Venus.
- AFS resembles the abstract file service model in these aspects: (1) a flat file service is implemented by the Vice servers, and the hierarchic directory structure required by UNIX user programs is implemented by the set of Venus processes in the workstations; (2) each file and directory in the shared file space is identified by a unique 96-bit identifier (fid), similar to a UFID, and the Venus processes translate the pathnames issued by clients into fids (sketched below).

Other aspects:
- UNIX kernel modifications: we have noted that the Vice server is a user-level process running in the server computer and that the server host is dedicated to the provision of an AFS service; the UNIX kernel in AFS hosts is altered so that Vice can perform file operations in terms of file handles instead of the conventional UNIX file descriptors.
- Location database: each server contains a copy of a fully replicated location database giving a mapping of volume names to servers (sketched below).
- Threads: the implementations of Vice and Venus make use of a non-preemptive threads package to enable requests to be processed concurrently at both the client and the server.
- Read-only replicas: volumes containing files that are read frequently but rarely modified can be replicated as read-only volumes at several servers.
- Bulk transfers: AFS transfers files between clients and servers in 64-kbyte chunks.
- Partial file caching: the need to transfer the entire contents of files to clients even when the application requires only a small portion of the file is an obvious source of inefficiency; version 3 of AFS removed this requirement, allowing file data to be transferred and cached in 64-kbyte blocks while still retaining the consistency semantics and other features of the AFS protocol (sketched below).
- Performance: the primary goal of AFS is scalability, so its performance with large numbers of users is of particular interest; whole-file caching and the callback protocol led to dramatically reduced loads on the servers.
- Wide-area support: version 3 of AFS supports multiple administrative cells, each with its own servers, clients, system administrators and users; a federation of cells can cooperate in presenting users with a uniform, seamless file name space.

Cache consistency:
- Callback promise: when Vice supplies a copy of a file to a Venus process, it also provides a callback promise, a token issued by the Vice server that is the custodian of the file, guaranteeing that the server will notify the client when another client modifies the file (sketched below).
- Update semantics: the goal of this cache consistency mechanism is to achieve the best approximation to one-copy file semantics that is practicable without serious performance degradation.
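The RAID striping idea noted under storage organization above can be made concrete with a toy example. The sketch below shows single-parity striping in the style of RAID levels 4/5, a simpler redundancy code than the general error-correcting codes mentioned above; the chunk size, disk count and function names are illustrative assumptions, not any particular RAID implementation.

```c
#include <stdio.h>
#include <string.h>

#define CHUNK 8        /* toy chunk size; real systems use kilobyte-scale chunks */
#define DATA_DISKS 3   /* data chunks per stripe; one extra disk holds parity */

/* Parity chunk = XOR of all data chunks in the stripe. */
void make_parity(unsigned char stripe[DATA_DISKS][CHUNK], unsigned char parity[CHUNK])
{
    memset(parity, 0, CHUNK);
    for (int d = 0; d < DATA_DISKS; d++)
        for (int i = 0; i < CHUNK; i++)
            parity[i] ^= stripe[d][i];
}

/* Rebuild one failed data chunk by XOR-ing the survivors with parity. */
void rebuild(unsigned char stripe[DATA_DISKS][CHUNK],
             const unsigned char parity[CHUNK], int failed)
{
    memcpy(stripe[failed], parity, CHUNK);
    for (int d = 0; d < DATA_DISKS; d++)
        if (d != failed)
            for (int i = 0; i < CHUNK; i++)
                stripe[failed][i] ^= stripe[d][i];
}

int main(void)
{
    unsigned char stripe[DATA_DISKS][CHUNK] = { "chunk-0", "chunk-1", "chunk-2" };
    unsigned char parity[CHUNK];

    make_parity(stripe, parity);
    memset(stripe[1], 0, CHUNK);                  /* simulate losing disk 1 */
    rebuild(stripe, parity, 1);
    printf("recovered: %s\n", (char *)stripe[1]); /* prints "chunk-1" */
    return 0;
}
```

XOR parity survives any single-disk failure and lets operation continue while the failed disk is rebuilt; the error-correcting codes mentioned above generalize this idea to stronger failure models.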
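The 96-bit fid mentioned under Implementation is easy to picture as three 32-bit fields. The field names below follow the usual description of AFS fids (volume number, vnode number within the volume, uniquifier) but are illustrative, not AFS source code.

```c
#include <stdint.h>
#include <stdio.h>

/* A 96-bit AFS-style file identifier, modelled as three 32-bit fields. */
struct fid {
    uint32_t volume;      /* which volume holds the file */
    uint32_t vnode;       /* index of the file within that volume */
    uint32_t uniquifier;  /* distinguishes successive uses of a vnode slot */
};

int main(void)
{
    struct fid f = { .volume = 7, .vnode = 4096, .uniquifier = 2 };
    printf("fid %u.%u.%u, %zu bits\n",
           f.volume, f.vnode, f.uniquifier, 8 * sizeof f);
    return 0;
}
```

Because a fid carries no pathname information, it plays the role of a UFID in the flat file service: Venus resolves pathnames to fids, and Vice operates on fids alone.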
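To illustrate the fully replicated location database, here is a minimal sketch assuming a static in-memory table; the volume and server names are invented examples, not real AFS data.

```c
#include <stdio.h>
#include <string.h>

/* Toy location database: maps volume names to the server holding each
 * volume. Every AFS server keeps a complete copy of this table. */
struct location {
    const char *volume;
    const char *server;
};

static const struct location locdb[] = {
    { "user.alice", "vice1.example.edu" },
    { "proj.src",   "vice2.example.edu" },
};

const char *server_for(const char *volume)
{
    for (size_t i = 0; i < sizeof locdb / sizeof locdb[0]; i++)
        if (strcmp(locdb[i].volume, volume) == 0)
            return locdb[i].server;
    return NULL;   /* unknown volume */
}

int main(void)
{
    printf("proj.src is held by %s\n", server_for("proj.src"));
    return 0;
}
```

Since every server holds a full copy, any server can resolve any volume name; the table changes only when a volume is moved, so full replication is cheap.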
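The 64-kbyte partial-file caching of AFS version 3 can be sketched as a block-granularity cache: only the blocks an application actually touches are fetched. Everything below (the fake fetch RPC, the single-file cache array) is an illustrative assumption, not the AFS protocol itself.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK (64 * 1024)   /* AFS-3 transfers and caches 64-kbyte blocks */
#define MAX_BLOCKS 16       /* toy limit: cache for one small file */

static char *cache[MAX_BLOCKS];   /* NULL = block not yet cached */

/* Stand-in for the RPC that fetches one 64-kbyte block from the server. */
static void fetch_block(unsigned block_no, char *buf)
{
    printf("fetching block %u from server\n", block_no);
    memset(buf, 'a' + block_no % 26, BLOCK);   /* fake file contents */
}

/* Read one byte at `offset`, fetching only the enclosing block on a miss. */
static char read_byte(long offset)
{
    unsigned b = (unsigned)(offset / BLOCK);
    if (cache[b] == NULL) {                    /* cache miss */
        cache[b] = malloc(BLOCK);              /* error checks omitted */
        fetch_block(b, cache[b]);
    }
    return cache[b][offset % BLOCK];
}

int main(void)
{
    read_byte(10);          /* touches block 0 only */
    read_byte(3L * BLOCK);  /* touches block 3 only; blocks 1-2 never fetched */
    read_byte(20);          /* block 0 already cached: no fetch */
    return 0;
}
```

Whole-file caching would have fetched the entire file on the first read; block granularity avoids transferring untouched data while, as noted above, retaining AFS's consistency semantics.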
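Finally, a minimal sketch of how a client-side cache manager in the role of Venus might track the callback promises described under cache consistency. The two-state token and the revalidation interval are assumptions for illustration; real AFS manages callbacks per fid over RPC.

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* State of the callback-promise token held with each cached file. */
enum callback_state { CB_VALID, CB_CANCELLED };

struct cache_entry {
    enum callback_state callback;  /* CB_CANCELLED after a server callback */
    time_t issued;                 /* when the promise was granted/renewed */
};

#define CB_LIFETIME 600            /* assumed revalidation interval, seconds */

/* On open(): may the locally cached copy be used without contacting Vice?
 * If not, the client must revalidate (or refetch) from the server. */
bool cache_usable(const struct cache_entry *e, time_t now)
{
    if (e->callback == CB_CANCELLED)
        return false;              /* server announced a newer version */
    if (now - e->issued > CB_LIFETIME)
        return false;              /* promise is stale: revalidate first */
    return true;
}

int main(void)
{
    struct cache_entry e = { CB_VALID, time(NULL) };
    printf("usable: %d\n", cache_usable(&e, time(NULL)));   /* 1 */
    e.callback = CB_CANCELLED;     /* another client updated the file */
    printf("usable: %d\n", cache_usable(&e, time(NULL)));   /* 0 */
    return 0;
}
```

The timeout and the possibility of lost callback messages are why the update semantics described above are an approximation to one-copy semantics rather than a guarantee.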