Distributed Transaction Management
Outline
Introduction
Concurrency Control Protocols Locking
Timestamping
Deadlock Handling
Replication
Introduction
DBMS Transaction Subsystem
- Transaction manager
- Scheduler
- Recovery manager
- Buffer manager
- Systems manager
- Access manager
- File manager
- The transaction manager coordinates transactions on behalf of application programs; it communicates with the scheduler.
- The scheduler handles concurrency control. Its objective is to maximize concurrency without allowing simultaneous transactions to interfere with one another.
- The recovery manager ensures that the database is restored to the state it was in before a failure occurred.
- The buffer manager is responsible for the transfer of data between disk storage and main memory.
Distributed Transactions Management System
Each site has a local transaction manager responsible for:
- Maintaining a log for recovery purposes
- Participating in coordinating the concurrent execution of the transactions executing at that site
Each site has a transaction coordinator, which is responsible for:
- Starting the execution of transactions that originate at the site
- Distributing subtransactions to appropriate sites for execution
- Coordinating the termination of each transaction that originates at the site, which may result in the transaction being committed at all sites or aborted at all sites
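The coordinator's role above can be sketched in Python. This is only an illustrative sketch: the `run_at_site` callback, the dictionaries, and the commit/abort return values are invented here, not a real DBMS interface.

```python
# Hypothetical sketch of a transaction coordinator: it distributes
# subtransactions to their sites and then commits at ALL sites or
# aborts at ALL sites. `run_at_site` stands in for the messaging layer.

def coordinate(subtransactions, run_at_site):
    results = {}
    try:
        for site, work in subtransactions.items():
            results[site] = run_at_site(site, work)   # e.g. Ts3, Ts5, Ts7
    except Exception:
        return ("abort", None)     # abort the transaction at all sites
    return ("commit", results)     # commit the transaction at all sites

# Toy run: each subtransaction returns the staff names held at its site.
names_by_site = {3: ["Ann"], 5: ["Bob", "Cal"], 7: ["Dee"]}
status, results = coordinate(
    {site: "print staff names" for site in (3, 5, 7)},
    lambda site, _work: names_by_site[site],
)
```

If any subtransaction fails, the coordinator aborts everywhere, which mirrors the "committed at all sites or aborted at all sites" rule.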
Each site has a data communications module that handles inter-site communications. The TMs at each site do not communicate with each other directly.
Transaction System Architecture
Coordination of Distributed Transaction
In a DDBMS, the local transaction manager (TM) still exists in each local DBMS.
There is also a global transaction manager, or transaction coordinator, at each site.
Inter-site communication is handled by the data communications component at each site.
Example: a transaction T prints out the names of all staff, given the fragmentation schema S1, S2, S21, S22, S23:
S1: Sno, position, sex, dob, salary, nin (Staff), at site 5
S2: Sno, fname, lname, address, telno, bno (Staff)
S21: bno='B3' (S2), at site 3
S22: bno='B5' (S2), at site 5
S23: bno='B7' (S2), at site 7
Subtransactions: Ts3 at site 3, Ts5 at site 5, Ts7 at site 7.
Procedure: Coordination of Distributed Transaction
TC3 at site S3 divides the transaction into a number of subtransactions:
Ts3: at site 3
Ts5: at site 5
Ts7: at site 7
The data communications (DM) component at site S3 sends the subtransactions to the appropriate sites.
The transaction managers at the affected sites (S5 and S7) process the subtransactions.
The results of the subtransactions are communicated back to TC3 at site S3 via the DM components.
Result of Ts3: from site 3
Result of Ts5: from site 5
Result of Ts7: from site 7
Concurrency Control in Distributed Databases
Problems in Concurrent Use of DDBMS
Lost update
Uncommitted dependency
Inconsistent analysis
Multiple-copy consistency problem
Concurrency Control in DDB
Modify concurrency control schemes (locking and timestamping) for use in a distributed environment.
We assume that each site participates in the execution of a commit protocol to ensure global transaction atomicity.
We assume (for now) all replicas of any item are updated.
Transaction Transparency: Failure Transparency
Example: given a global transaction that has to update data at two sites, S1 and S2:
- The subtransaction at S1 completes successfully and commits.
- The subtransaction at S2 is unable to commit and rolls back its changes to ensure local consistency.
Problem: the distributed database is now in an inconsistent state. We are unable to "uncommit" the data at site S1, due to the durability of the subtransaction at S1.
Concurrency Control in DDBMS
Approaches: locking protocols
- Single-Lock Manager (2PL)
- Distributed Lock Manager (2PL)
- Primary Copy (2PL)
- Majority Protocol
Single-Lock-Manager (2PL)
The system maintains a single lock manager that resides in a single chosen site, say Si.
The transaction coordinator at site S1 divides the transaction into subtransactions and acts as the global transaction manager (transaction coordinator).
The DM component at site S1 sends the subtransactions to the appropriate sites.
All lock and unlock requests are made at site Si. When a transaction needs to lock a data item, it sends a lock request to Si, and the lock manager determines whether the lock can be granted immediately:
- If yes, the lock manager sends a message to the site that initiated the request.
- If no, the request is delayed until it can be granted, at which time a message is sent to the initiating site.
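The grant-or-delay behavior above can be sketched as follows. This is a minimal sketch under invented names (`LockManager`, `request`, `release`); a real implementation would exchange actual network messages rather than return values.

```python
# Minimal sketch of the single-lock-manager scheme: every lock request
# from every site goes to the one manager residing at the chosen site Si.
from collections import deque

class LockManager:
    def __init__(self):
        self.holder = {}     # data item -> site currently holding the lock
        self.waiting = {}    # data item -> queue of delayed requesting sites

    def request(self, item, site):
        """Grant immediately if free, otherwise delay until release()."""
        if item not in self.holder:
            self.holder[item] = site
            return "granted"                 # reply sent to initiating site
        self.waiting.setdefault(item, deque()).append(site)
        return "delayed"                     # reply deferred until granted

    def release(self, item):
        """Hand the lock to the next waiter, if any, and notify its site."""
        queue = self.waiting.get(item)
        if queue:
            self.holder[item] = queue.popleft()
            return self.holder[item]         # this site is now notified
        del self.holder[item]
        return None

lm = LockManager()                           # resides at the single site Si
```

A delayed request is simply queued; the "message sent to the initiating site" corresponds to the value returned by `release`.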
If the transaction involves an update of a data item that is replicated, the coordinator must ensure that all copies of the data item are updated.
The coordinator requests write locks on all copies before updating each copy and releasing the locks. For reads, the coordinator can elect to use any copy of the item, generally the copy at its own site, if one exists.
Single-Lock-Manager Approach
Advantages of the scheme:
- Simple implementation
- Simple deadlock handling
Disadvantages of the scheme:
- Bottleneck: the lock manager site becomes a bottleneck
- Vulnerability: the system is vulnerable to lock manager site failure
Primary Copy (2PL)
Choose one replica of a data item to be the primary copy. The site containing that replica is called the primary site for the data item; the other copies are called slave copies.
Different data items can have different primary sites.
When a transaction needs to lock a data item Q for an update, it requests a lock at the primary site of Q. The response to the request is delayed until the lock can be granted. The lock at the primary site implicitly locks all replicas of the data item.
Once the primary copy has been updated, the change can be propagated to the slave copies. The propagation should be carried out as soon as possible, to prevent other transactions from reading out-of-date values.
Benefit: can be used when data is selectively replicated, updates are infrequent, and sites do not always need the very latest version of the data.
Drawback: if the primary site of Q fails, Q is inaccessible, even though other sites containing a replica may be accessible.
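The primary-copy scheme can be sketched as below. The dict-based "sites" and the `update` function are illustrative assumptions; the point is that only one lock (at the primary site) is taken, and propagation to the slaves happens afterwards.

```python
# Sketch of primary-copy 2PL for one data item Q replicated at 3 sites.
# Locking at the primary site implicitly locks every replica.

sites = {1: {"Q": 10}, 2: {"Q": 10}, 3: {"Q": 10}}   # three replicas of Q
PRIMARY = {"Q": 1}                                   # primary site per item
primary_locked = set()                               # items locked at primary

def update(item, value):
    """Lock at the primary site only, update it, then propagate ASAP."""
    if item in primary_locked:
        raise RuntimeError("delayed: lock held at primary site")
    primary_locked.add(item)
    sites[PRIMARY[item]][item] = value       # update the primary copy
    for s, db in sites.items():              # propagate to slave copies
        if s != PRIMARY[item]:
            db[item] = value
    primary_locked.discard(item)             # release the single lock

update("Q", 42)
```

Note the drawback from the text: if site 1 (the primary for Q) were down, `update` could not proceed even though sites 2 and 3 are up.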
Distributed Lock Manager (2PL)
A lock manager is assigned at each site; lock managers control access to local data items.
When a transaction wishes to lock a data item Q that is not replicated and resides at site Si, a message is sent to the lock manager at site Si requesting the lock. If Q is locked in an incompatible mode, the request is delayed until it can be granted. Once the lock request is granted, the lock manager sends a message back to the initiator indicating that it has granted the lock request.
When a transaction wishes to lock a data item Q that is replicated, Read-One-Write-All is implemented: any copy can be used for reading the data item.
All copies must be write-locked before an item can be updated.
Advantage: work is distributed and can be made robust to failures.
Disadvantage: deadlock detection is more complicated, due to multiple lock managers.
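Read-One-Write-All under distributed lock managers can be sketched as follows. The per-site lock tables and the function names are invented for illustration; a read lock needs one replica site, a write lock needs every replica site.

```python
# Sketch of Read-One-Write-All (ROWA) with one lock table per site.

replica_sites = {"Q": [1, 2, 3]}        # Q is replicated at sites 1, 2, 3
lock_tables = {1: {}, 2: {}, 3: {}}     # per-site: item -> "read"/"write"

def read_lock(item, tx):
    """Read one: a read lock at ANY single replica site suffices."""
    site = replica_sites[item][0]       # e.g. the nearest copy
    if lock_tables[site].get(item) == "write":
        return False                    # incompatible mode; request delayed
    lock_tables[site][item] = "read"
    return True

def write_lock(item, tx):
    """Write all: every replica site must grant a write lock."""
    for site in replica_sites[item]:
        if item in lock_tables[site]:
            return False                # some copy is locked; request delayed
    for site in replica_sites[item]:
        lock_tables[site][item] = "write"
    return True
```

A read costs one message; a write costs a message to each replica site, which is why deadlock detection now involves multiple lock managers.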
Majority Protocol (extended 2PL)
A local lock manager at each site administers lock and unlock requests for data items stored at that site. When a transaction wishes to lock an unreplicated data item Q residing at site Si, a message is sent to Si's lock manager. If Q is locked in an incompatible mode, the request is delayed until it can be granted. When the lock request can be granted, the lock manager sends a message back to the initiator indicating that the lock request has been granted.
In the case of replicated data: if Q is replicated at n sites, then a lock-request message must be sent to more than half of the n sites at which Q is stored (n/2 + 1). The transaction does not operate on Q until it has obtained a lock on a majority of the replicas of Q. When writing the data item, the transaction performs writes on all replicas.
Benefit: can be used even when some sites are unavailable.
Drawback: there is a potential for deadlock even with a single item. Example: in a system with four sites and full replication, T1 and T2 both wish to lock data item Q in exclusive mode. T1 succeeds in locking Q at sites S1 and S3, while T2 succeeds in locking Q at sites S2 and S4. Each must wait to acquire a third lock; hence a deadlock has occurred.
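The quorum arithmetic and the four-site deadlock example can be checked in a few lines. The `try_lock` helper and the flat lock-state dict are illustrative assumptions, not a real protocol implementation.

```python
# Sketch of the majority quorum rule: a transaction holds the global lock
# on Q only after locking more than half of Q's replicas.

def majority(n):
    return n // 2 + 1                    # more than half of n replicas

def try_lock(item, tx, sites, lock_state):
    """Lock every free replica; succeed only if a majority was obtained."""
    granted = []
    for s in sites:
        if lock_state.get((s, item)) is None:
            lock_state[(s, item)] = tx
            granted.append(s)
    return len(granted) >= majority(len(sites))

# Four sites, full replication: T1 locked Q at S1 and S3, T2 at S2 and S4.
state = {(1, "Q"): "T1", (3, "Q"): "T1", (2, "Q"): "T2", (4, "Q"): "T2"}
t1_has = sum(v == "T1" for v in state.values())
t2_has = sum(v == "T2" for v in state.values())
# Each holds 2 of 4 replicas; a majority needs 3, so both wait: deadlock.
```

With n = 4, the quorum is 3, so two transactions holding two replicas each block one another, exactly the single-item deadlock described above.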
Timestamping
Timestamp-based concurrency-control protocols can be used in distributed systems. Each transaction must be given a unique timestamp.
Methods for generating unique timestamps:
- Centralized scheme
- Distributed scheme
Centralized Scheme
A single site distributes the timestamps, using a logical counter or its own clock.
Distributed Scheme Timestamping
Each site generates a unique local timestamp using either a logical counter or the local clock.
The globally unique timestamp is obtained by concatenating the unique local timestamp with the unique site identifier. The order of concatenation (site identifier in the least significant position) is important: it ensures that timestamps generated at one site are not always greater than those generated at other sites.
Problems occur when sites generate timestamps at different rates: a site with a slow clock will assign smaller timestamps. This is still logically correct (serializability is not affected), but it "disadvantages" that site's transactions.
To fix this problem, define within each site Si a logical clock (LCi), which generates the unique local timestamp. Require that Si advance its logical clock whenever a request is received from a transaction Ti with timestamp <x,y> and x is greater than the current value of LCi. In this case, site Si advances its logical clock to the value x + 1.
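The two rules above (site identifier in the least significant position, and advancing a slow logical clock) can be sketched as follows. The `Site` class and method names are illustrative assumptions; tuple comparison stands in for comparing concatenated timestamps.

```python
# Sketch of globally unique timestamps as (local logical clock, site id).
# Putting the site id last means comparison is primarily by local time,
# so one site's timestamps are not always greater than another's.

class Site:
    def __init__(self, site_id):
        self.site_id = site_id
        self.lc = 0                      # logical clock LCi

    def new_timestamp(self):
        self.lc += 1
        return (self.lc, self.site_id)   # "concatenation" <x, site id>

    def receive(self, ts):
        """On a request with timestamp <x, y>: if x > LCi, set LCi = x + 1."""
        x, _y = ts
        if x > self.lc:
            self.lc = x + 1

s1, s2 = Site(1), Site(2)
t_fast = s1.new_timestamp()              # site 1's clock runs fast...
for _ in range(5):
    t_fast = s1.new_timestamp()          # ...reaching (6, 1)
s2.receive(t_fast)                       # slow site 2 catches up to 7
t_slow = s2.new_timestamp()              # so its next timestamp is (8, 2)
```

Without the `receive` rule, site 2 would keep issuing small timestamps and its transactions would be persistently "older" than site 1's.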
Deadlock Handling
Consider the following two transactions and history, with item X and transaction T1 at site 1, and item Y and transaction T2 at site 2:
T1: write(X); write(Y)
T2: write(Y); write(X)
T1: write-lock on X; write(X)
T2: write-lock on Y; write(Y)
T1: write-lock on Y; write(Y)
T2: write-lock on X; write(X)
No wfg at Site 1; no wfg at Site 2.
When a transaction T1 at site S1 needs a resource at site S2, it sends a request to site S2. If the resource is held by transaction T2 (at site S2), the system inserts an edge T1 → T2 in the local wfg at site S2.
T1 at site S1 needs a resource (Y) at site S2, so it sends a request to site S2. Since the resource is held by transaction T2, the system inserts an edge T1 → T2 in the local wfg at site S2.
T2 at site S2 needs a resource (X) at site S1, so it sends a request to site S1. Since the resource is held by transaction T1, the system inserts an edge T2 → T1 in the local wfg at site S1.
Local wfg at S1: T2 → T1. Local wfg at S2: T1 → T2.
Using the local wfg at each site, no deadlock is detected. If any local wfg has a cycle, a deadlock has occurred; here, neither local wfg has a cycle.
Even if there are no cycles in the local wfgs, a deadlock exists if the union of the local wfgs (the global wfg) contains a cycle: here, T1 → T2 → T1.
Local Wait-For Graphs
Two sites, each maintaining its local wfg. T2 and T3 appear at both sites, indicating that those transactions have requested items at both sites.
Local and Global Wait-For Graphs
Local graphs: no cycle. Global graph: a cycle, hence a deadlock.
A single site is appointed as the deadlock detection coordinator (DDC).
Deadlock Detection: Centralized Approach
The DDC is responsible for constructing and maintaining the global WFG.
Periodically, each lock manager transmits its local WFG to the DDC.
The DDC builds the global WFG and checks it for cycles.
If one or more cycles exist, the DDC breaks each cycle by selecting transactions to be rolled back and restarted.
The DDC informs all sites affected by the rollbacks and restarts.
Centralized Approach
The global wait-for graph can be (re)constructed when:
- a new edge is inserted in or removed from one of the local wait-for graphs;
- a number of changes have occurred in a local wait-for graph; or
- the coordinator needs to invoke cycle detection.
If the coordinator finds a cycle, it selects a victim and notifies all sites. The sites roll back the victim transaction.
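The DDC's union-and-check step can be sketched as follows. The edge-set representation and function names are illustrative assumptions; the graphs encode "waiter → holder" edges as in the T1/T2 example above.

```python
# Sketch of the DDC's core job: union the local wait-for graphs and look
# for a cycle. Each local wfg is a set of (waiter, holder) edges.

def global_wfg(local_wfgs):
    """Union of the local wait-for graphs."""
    edges = set()
    for wfg in local_wfgs:
        edges |= wfg
    return edges

def has_cycle(edges):
    """Standard DFS cycle detection on the (small) global graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    state = {}                           # node -> "visiting" | "done"

    def dfs(u):
        state[u] = "visiting"
        for v in adj.get(u, []):
            if state.get(v) == "visiting":
                return True              # back edge: cycle found
            if state.get(v) is None and dfs(v):
                return True
        state[u] = "done"
        return False

    return any(state.get(n) is None and dfs(n) for n in list(adj))

# No cycle in either local wfg, but their union contains T1 -> T2 -> T1.
site1 = {("T2", "T1")}
site2 = {("T1", "T2")}
```

Each local graph is acyclic, yet the union is not, which is exactly why local detection alone misses the distributed deadlock.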
Types of WFG
There are two types of wfg (due to communication delay):
- Real graph: describes the real, but unknown, state of the system at any instant in time.
- Constructed graph: an approximation generated by the controller during the execution of its algorithm.
Example Wait-For Graph for False Cycles
Initial state:
False Cycles
Suppose that, starting from the state shown in the figure:
1. T2 releases resources at S1, resulting in a "remove T1 → T2" message from the transaction manager at site S1 to the coordinator.
2. T2 then requests a resource held by T3 at site S2, resulting in an "insert T2 → T3" message from S2 to the coordinator.
Suppose further that the insert message reaches the coordinator before the remove message (this can happen due to network delays). The coordinator would then find a false cycle after the insert (but before the remove):
T1 → T2 → T3 → T1
This false cycle never existed in reality.
Unnecessary Rollbacks
Unnecessary rollbacks may result when deadlock has indeed occurred and a victim has been picked, and meanwhile one of the transactions was aborted for reasons unrelated to the deadlock.
Unnecessary rollbacks can result from false cycles in the global wait-for graph; however, likelihood of false cycles is low.
Example: site S1 decides to abort T2. At the same time, the coordinator discovers a cycle and picks T3 as a victim. Both T3 and T2 are now rolled back, although only T2 needed to be rolled back.
Replication with Weak Consistency
Many commercial databases support replication of data with weak degrees of consistency (i.e., without a guarantee of serializability).
Types of replication:
- Master-slave replication
- Multimaster replication
- Lazy propagation
Master-slave replication: updates are performed only at a single "master" site and propagated to "slave" sites (so update conflicts do not occur).
- Propagation is not part of the update transaction: it is decoupled. It may occur immediately after the transaction commits, or periodically.
- Data may only be read at slave sites, not updated, so there is no need to obtain locks at any remote site.
- Particularly useful for distributing information, e.g. from a central office to branch offices, and for running read-only queries offline from the main database.
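The decoupling of propagation from the update transaction can be sketched as follows. The log/apply structure and names are illustrative assumptions, not a real replication engine.

```python
# Sketch of master-slave replication with decoupled (lazy) propagation:
# updates commit at the master first; slaves are refreshed later.

master = {}            # the only site where updates are performed
slaves = [{}, {}]      # read-only copies at other sites
log = []               # committed updates awaiting propagation

def update_at_master(key, value):
    master[key] = value
    log.append((key, value))     # propagation is decoupled from the commit

def propagate():
    """Run immediately after a commit, or periodically."""
    while log:
        key, value = log.pop(0)
        for s in slaves:
            s[key] = value       # no remote locks needed: slaves never write

update_at_master("branch_total", 100)
stale = slaves[0].get("branch_total")   # still unset until propagation runs
propagate()
```

Between the commit and `propagate()`, a slave serves a stale value, which is precisely the weak consistency the text describes.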