This Concept Map, created with IHMC CmapTools, has information related to: ch8 dist andr (Andrew File System).

Andrew File System:
- Whole-file serving: the entire contents of directories and files are transmitted to client computers by AFS servers.
- Whole-file caching: once a copy of a file or a chunk has been transferred to a client computer, it is stored in a cache on the local disk.

Implementation: AFS is implemented as two software components that exist as UNIX processes, called Vice and Venus.

Other aspects:
- UNIX kernel modifications: we have noted that the Vice server is a user-level process running in the server computer and that the server host is dedicated to the provision of an AFS service; the UNIX kernel in AFS hosts is altered so that Vice can perform file operations in terms of file handles instead of the conventional UNIX file descriptors.
- Location database: each server contains a copy of a fully replicated location database giving a mapping of volume names to servers.
- Threads: the implementations of Vice and Venus make use of a non-preemptive threads package to enable requests to be processed concurrently at both the client and the server.
- Read-only replicas: volumes containing files that are read frequently but rarely modified can be replicated as read-only volumes at several servers.
- Bulk transfers: AFS transfers files between clients and servers in 64-kbyte chunks.
- Partial-file caching: transferring the entire contents of a file to a client even when the application needs to read only a small portion of it is an obvious source of inefficiency; version 3 of AFS removed this requirement, allowing file data to be transferred and cached in 64-kbyte blocks while still retaining the consistency semantics and other features of the AFS protocol.
- Performance: the primary goal of AFS is scalability, so its performance with large numbers of users is of particular interest; whole-file caching and the callback protocol led to dramatically reduced loads on the servers.
- Wide-area support: version 3 of AFS supports multiple administrative cells, each with its own servers, clients, system administrators and users; a federation of cells can cooperate in presenting users with a uniform, seamless file name space.

Cache consistency: when Vice supplies a copy of a file to a Venus process, it also provides a callback promise, a token issued by the Vice server that is the custodian of the file (a sketch of this mechanism follows below).

Update semantics: the goal of this cache consistency mechanism is to achieve the best approximation to one-copy file semantics that is practicable without serious performance degradation.
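The following is a minimal sketch of how the callback-promise check could work on the client side; the names (ViceStub, VenusCache, break_callback) and the fid strings are illustrative, not taken from any real AFS implementation. The key point it shows is that a cached copy with a valid promise is served with no server traffic at all, and a broken promise forces a refetch.

```python
from enum import Enum


class CallbackState(Enum):
    VALID = 1       # the server has promised to notify us of updates
    CANCELLED = 2   # the promise was revoked (or, in real AFS, timed out)


class ViceStub:
    """Stand-in for a Vice server: returns file data with a fresh callback promise."""
    def __init__(self):
        self.store = {"fid-1": b"hello"}

    def fetch(self, fid):
        return self.store[fid]


class VenusCache:
    """Client-side (Venus) cache guarded by callback-promise validity checks."""
    def __init__(self, server):
        self.server = server
        self.cache = {}  # fid -> [data, CallbackState]

    def open(self, fid):
        entry = self.cache.get(fid)
        if entry and entry[1] is CallbackState.VALID:
            # Valid promise: serve the local copy without contacting the server.
            return entry[0]
        # Cache miss or cancelled promise: refetch; the reply carries a new promise.
        data = self.server.fetch(fid)
        self.cache[fid] = [data, CallbackState.VALID]
        return data

    def break_callback(self, fid):
        # Invoked via a callback RPC from Vice when another client
        # closes a modified copy of the file.
        if fid in self.cache:
            self.cache[fid][1] = CallbackState.CANCELLED


venus = VenusCache(ViceStub())
print(venus.open("fid-1"))   # fetched from the server; promise now VALID
print(venus.open("fid-1"))   # served from the local cache, no server traffic
venus.break_callback("fid-1")
print(venus.open("fid-1"))   # promise cancelled, so the copy is refetched
```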
AFS resembles the abstract file service model in these aspects:
1. A flat file service is implemented by the Vice servers, and the hierarchic directory structure required by UNIX user programs is implemented by the set of Venus processes in the workstations.
2. Each file and directory in the shared file space is identified by a unique 96-bit identifier (fid), similar to a UFID. The Venus processes translate the pathnames issued by clients to fids.
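A toy sketch of that translation step follows, assuming the conventional breakdown of the 96-bit fid into three 32-bit fields (volume number, vnode number, uniquifier); the lookup function and the sample directory contents are hypothetical, standing in for the cached directory copies that Venus would consult locally.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Fid:
    """96-bit AFS file identifier: three 32-bit fields."""
    volume: int      # which volume holds the file
    vnode: int       # index of the file within that volume
    uniquifier: int  # distinguishes reuses of the same vnode slot


def lookup(directory_contents, root_fid, path):
    """Translate a UNIX pathname to a fid, one component at a time.

    directory_contents maps a directory's fid to {name: fid}; a real
    Venus would fetch and cache a directory from Vice on a miss.
    """
    fid = root_fid
    for component in path.strip("/").split("/"):
        entries = directory_contents[fid]  # cached directory contents
        fid = entries[component]
    return fid


root = Fid(volume=1, vnode=1, uniquifier=1)
dirs = {
    root: {"usr": Fid(1, 20, 1)},
    Fid(1, 20, 1): {"readme.txt": Fid(1, 57, 3)},
}
print(lookup(dirs, root, "/usr/readme.txt"))
# -> Fid(volume=1, vnode=57, uniquifier=3)
```

Because the translation happens in Venus, the servers never see pathnames; they deal only in fids, which is what allows Vice to remain a flat file service.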