This Concept Map, created with IHMC CmapTools, has information related to: ch14 dist dead (distributed deadlocks).

Distributed deadlocks
- Transactions at a single server can experience deadlocks; the server can either prevent them, or detect and resolve them. Using timeouts to resolve deadlocks is clumsy; detection, based on wait-for graphs, is preferable.
- Distributed transactions lead to distributed deadlocks. In theory, a global wait-for graph can be constructed from the local ones; a cycle in the global wait-for graph that is not in any local one is a distributed deadlock.

Deadlock detection: local wait-for graphs
- Each server can build a local wait-for graph, e.g.:
  - server Y: U -> V added when U requests b.withdraw(30)
  - server Z: V -> W added when V requests c.withdraw(20)
  - server X: W -> U added when W requests a.withdraw(20)
- To find a global cycle, communication between the servers is needed.

Centralized deadlock detection
- One server takes on the role of global deadlock detector; the other servers send it their local graphs from time to time.
- It detects deadlocks, makes decisions about which transactions to abort, and informs the other servers.
- It has the usual problems of a centralized service: poor availability, lack of fault tolerance, and no ability to scale.

Phantom deadlocks
- A phantom deadlock is a 'deadlock' that is detected but is not really one.
- It happens when there appears to be a cycle, but one of the transactions has already released a lock; the cause is time lags in distributing the local graphs.
- In the figure, suppose U releases the object at X and then waits for V at Y, and the global detector receives Y's graph before X's: it sees the cycle T -> U -> V -> T, which no longer exists.
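The detection idea above can be sketched in code. The following is a minimal illustration (all function and variable names are my own, not from the source): each server's local wait-for graph from the example is acyclic on its own, but merging them at a centralized detector exposes the global cycle through U, V and W.

```python
from collections import defaultdict

def find_cycle(graph):
    """Return one cycle in a directed graph as a list of nodes, or None.

    graph maps a transaction to the transactions it waits for
    (an edge T1 -> T2 means "T1 waits for a lock held by T2")."""
    visited, path = set(), []

    def dfs(node):
        if node in path:                       # back edge: cycle found
            return path[path.index(node):]
        if node in visited:                    # already fully explored
            return None
        visited.add(node)
        path.append(node)
        for nxt in graph.get(node, ()):
            cycle = dfs(nxt)
            if cycle:
                return cycle
        path.pop()
        return None

    for start in list(graph):
        cycle = dfs(start)
        if cycle:
            return cycle
    return None

# Local wait-for graphs from the example above.
local_graphs = {
    "Y": {"U": ["V"]},   # U -> V: U requests b.withdraw(30)
    "Z": {"V": ["W"]},   # V -> W: V requests c.withdraw(20)
    "X": {"W": ["U"]},   # W -> U: W requests a.withdraw(20)
}

# No local graph has a cycle, so no server can detect the deadlock alone.
assert all(find_cycle(g) is None for g in local_graphs.values())

# The global detector merges the local graphs it has received...
global_graph = defaultdict(list)
for g in local_graphs.values():
    for t, waits in g.items():
        global_graph[t].extend(waits)

# ...and only the merged graph reveals the distributed deadlock U -> V -> W -> U.
print(find_cycle(global_graph))
```

Note that the merge step is also where phantom deadlocks creep in: if server X's updated graph (U no longer waiting) arrives after server Y's, the detector merges stale and fresh edges and reports a cycle that no longer exists.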