1 Real-Time Database Systems and Data Services: Issues and Challenges Sang H. Son Department of Computer Science University of Virginia Charlottesville, Virginia 22903 [email protected]

  • Slide 1
  • 1 Real-Time Database Systems and Data Services: Issues and Challenges Sang H. Son Department of Computer Science University of Virginia Charlottesville, Virginia 22903 [email protected]
  • Slide 2
  • 2 Outline Introduction: real-time database systems and real-time data services Why real-time databases? Misconceptions about real-time DBS Paradigm comparison Characteristics of data and transactions in real-time DBS Origins of time constraints Temporal consistency and data freshness Time constraints of transactions Real-time transaction processing Priority assignment Scheduling and concurrency control Overload management and recovery
  • Slide 3
  • 3 Outline (contd) Advanced real-time applications Active, object-oriented, main-memory databases Flexible security paradigm for real-time databases Embedded databases Real-world applications and examples Real-time database projects and research prototypes BeeHive system Research issues, trends, and challenges Exercises
  • Slide 4
  • 4 I. Introduction Outline Motivation: Why real-time databases and data services? A brief review: real-time systems Misconceptions about real-time DBS Comparison of different paradigms: Real-time systems vs real-time database system Conventional DBS vs real-time DBS
  • Slide 5
  • 5 Some Facts about Real-Time Databases Fact 1: As the complexity of real-time systems and applications goes up, the amount of information to be handled by real-time systems increases, motivating the need for database and data service functionality (as opposed to ad hoc techniques and internal data structures) Fact 2: Conventional databases do not support timing and temporal requirements, and their design objectives are not appropriate for real-time applications Fact 3: Tasks and transactions have both similarities and distinct differences, i.e., the traditional task-centric view is not applicable to real-time databases.
  • Slide 6
  • 6 Real-Time Databases and Services: Examples They are used to monitor and control real-world activities Telecommunication systems routers and network management systems telephone switching systems Control systems automatic tracking and object positioning engine control in automobiles Multimedia servers for real-time streaming E-commerce and e-business stock market: program stock trading financial services: e.g., credit card transactions Web-based data services
  • Slide 7
  • 7 Something to Remember... Real-time != FAST. Real-time != nanosecs or secs. Real-time means explicit or implicit time constraints. A high-performance database which is simply fast, without the capability of specifying and enforcing time constraints, is not appropriate for real-time applications
  • Slide 8
  • 8 A Brief Review: Real-Time Systems A system whose basic specification and design correctness arguments must include its ability to meet its time constraints. Its correctness depends not only on the logical correctness, but also on the timeliness of its actions.
  • Slide 9
  • 9 Review: Real-Time Systems Characteristics of real-time systems timeliness and predictability typically embedded in a large complex system dependability (reliability) is crucial explicit timing constraints (soft, firm, hard) A large number of applications aerospace and defense systems, nuclear systems, robotics, process control, agile manufacturing, stock exchange, network and traffic management, multimedia computing, and medical systems Rapid growth in research and development workshops, conferences, journals, commercial products standards (POSIX, RT-Java, RT-CORBA, etc)
  • Slide 10
  • 10 Time Constraints [Figure: value functions v(t) over time t; the value v0 drops abruptly at deadline d1 for hard and firm deadlines, and decays gradually after deadline d2 for a soft deadline]
  • Slide 11
  • 11 Databases for Real-Time Systems Critical in real-time systems (any computing needs correct data) real-time computing needs to access data: real-world applications involve time constrained access to data that may have temporal property traditional real-time systems manage data in application- dependent structures as systems evolve, more complex applications require efficient access to more data Function of real-time databases gathering data from the environment, processing it in the context of information acquired in the past, for providing timely and temporally correct response
  • Slide 12
  • 12 What is a Real-Time Database? A real-time database (RTDB) is a data store whose operations execute with predictable response, and with application-acceptable levels of logical and temporal consistency of data, in addition to timely execution of transactions with the ACID properties. C. D. Locke Chief Scientist, TimeSys Co.
  • Slide 13
  • 13 Definitions DB - Database DBS - Database System RTDB - Real-Time Database Temporal consistency - validity of data Logical consistency - integrity of data Durability / Permanence - committed updates must persist Atomicity - no partial results seen or remain (all or nothing) Isolation - each transaction should not be aware of others Serial execution - inefficient but correct execution of concurrent transactions Serializability (SR) - interleaved execution equivalent to serial execution: conflict SR and view SR
  • Slide 14
  • 14 What is the gain of using RTDBS? More efficient way of handling large amounts of data Specification and enforcement of time constraints Improved overall timeliness and predictability Application semantic-based consistency and concurrency control Specialized overload management and recovery Exploitation of real-time support from underlying real-time OS Reduced development costs
  • Slide 15
  • 15 Gain of Using RTDBS (More Specifically) Presence of a schema - avoid redundant data and its description Built-in support for efficient data management - indexing, etc Transaction support - e.g. ACID properties Data integrity maintenance But Not all data in RTDB is durable: need to handle different types of data differently (will be discussed further later) Correctness can be traded for timeliness - Which is more important? Depends on applications, but timeliness is more important in many cases Atomicity can be relaxed: monotonic queries and transactions Isolation of transactions may not always be needed Temporally-correct serializable schedules are a subset of serializable schedules
  • Slide 16
  • 16 Objectives of Real-Time Databases Correctness requirements: consistency constraints time constraints on data and transactions Objectives timeliness and predictability: dealing with time constraints and violations Performance goals: minimize the penalty resulting from actions either delayed or not executed in time maximize the value accruing to the system from actions completed in time support multiple guarantee levels of quality for mixed workloads
  • Slide 17
  • 17 Why Not Use Conventional Databases? Inadequacies of conventional databases: poor responsiveness and lack of predictability no facility for applications to specify and enforce time constraints designed to provide good average response time, while possibly yielding unacceptable worst case execution time resource management and concurrency control in conventional database systems do not support timeliness and predictability
  • Slide 18
  • 18 Differences from Traditional Databases Traditional database systems persistent data and consistency constraints efficient access to data transaction support: ACID properties correct execution of transactions in the context of concurrent execution and failure designed to provide good average performance Databases for real-time systems temporal data, modeling a changing environment response time requirements from external world applications need temporally coherent view actively pursue timeliness and predictability
  • Slide 19
  • 19 Misconceptions on Real-Time Databases....
  • Slide 20
  • 20 Misconceptions about RTDBS (1) Advances in hardware will take care of RTDBS requirements. fast (higher throughput) does not guarantee timing constraints increase in size and complexity of databases and hardware will make it more difficult to meet timing constraints or to show such constraints will be met hardware alone cannot ensure that transactions will be scheduled properly to meet timing constraints or that data is temporally valid a transaction that uses obsolete data, however quickly, is still incorrect Real-time computing is equivalent to fast computing. minimizing average response time vs satisfying individual timing constraints predictability, not speed, is the foremost goal
  • Slide 21
  • 21 Misconceptions about RTDBS (2) Advances in standard DBS technology will take care of RTDB requirements. while novel techniques for query processing, buffering, and commit protocols would help, they cannot guarantee timeliness and temporal validity time-cognizant protocols for concurrency control, commit processing and transaction processing are mandatory There is no need for RTDBS because we can solve all the problems with current database systems adding features such as validity intervals and transaction deadlines to current database systems is in fact moving towards developing a real-time database system such an approach (adding features in an ad hoc manner) will be less efficient than developing one from the ground up with such capabilities
  • Slide 22
  • 22 Misconceptions about RTDBS (3) Using a conventional DBS and placing the DB in main memory is sufficient. although main-memory resident databases eliminate disk delays, conventional databases have many sources of unpredictability, such as delays due to blocking on locks and transaction scheduling increases in performance cannot completely make up for the lack of time-cognizant protocols in conventional database systems A temporal database is a RTDB. while both temporal DBs and RTDBs support time-specific data operations, they support different aspects of time in RTDB, timely execution is of primary concern, while in temporal DB, fairness, resource utilization, and ACID properties of transactions are more important
  • Slide 23
  • 23 Misconceptions about RTDBS (4) Problems in RTDBS will be solved in other areas. some techniques developed in other areas (e.g., RTS and DBS) cannot be applied directly, due to the differences between tasks and transactions, and differences in correctness requirements there are unique problems in RTDBS (e.g., maintaining temporal consistency of data) RTDBS guarantee is meaningless unless H/W and S/W never fails true, in part, due to the complexity involved in predictable and timely execution but it does not excuse the designer from reducing the odds of failure in meeting critical timing constraints Reference: Stankovic, Son, and Hansson, Misconceptions About Real-Time Databases, IEEE Computer, June 1999.
  • Slide 24
  • 24 Comparisons of Different Paradigms...
  • Slide 25
  • 25 Notion of Transaction Transaction partially ordered set of database operations a complete and consistent computation (i.e., they are designed to terminate correctly, leaving the database in a consistent state) units of user activity and system recovery have dynamic runtime behavior (dependent on the state of the database, i.e., data values) data is a resource (transaction can be blocked in accessing data objects) preemption may lead to abort
  • Slide 26
  • 26 Conventional vs. Real-Time Transactions Conventional Transactions Logically correct and consistent (ACID): atomicity consistency isolation durability Real-Time Transactions Logically correct and consistent (ACID) Approximately correct trade quality or correctness for timeliness Time correctness time constraints on transactions temporal constraints on data
  • Slide 27
  • 27 Conventional vs. Real-Time Databases: Correctness Criteria Conventional Databases: Logical consistency ACID properties of transactions: Atomicity Consistency Isolation Durability Data integrity constraints Real-Time Database Systems: Logical consistency ACID properties (may be relaxed) Data integrity constraints Enforce time constraints Deadlines of transactions External consistency absolute validity interval (AVI) Temporal consistency relative validity interval (RVI)
  • Slide 28
  • 28 Real-time Systems vs. RTDBS Real-time systems Task centric Deadlines attached to tasks Real-time databases Data centric Data has temporal validity, i.e., deadlines also attached to data Transactions must be executed by deadline to keep the data valid, in addition to produce results in a timely manner
  • Slide 29
  • 29
  • Slide 30
  • 30 II. Characteristics of Data and Transactions Outline The origin of time constraints Types of time constraints Real-time data and temporal consistency Real-time transactions
  • Slide 31
  • 31 The Origin of Time Constraints Meeting time constraints is of paramount importance in real-time database systems. Unfortunately, many of these time constraints are artifacts. If a real-time database system attempts to satisfy them all, it may lead to an over-constrained or over-designed system. Issues to be discussed: 1. What are the origins of (the semantics of) time constraints of the data, events, and actions? 2. Can we do better by knowing the origins of time constraints? 3. What is the connection between time-constrained events, data, and real-time transactions?
  • Slide 32
  • 32 Example #1: Objects on Conveyor Belts on a Factory Floor Recognizing and directing objects moving along a set of conveyor belts on a factory floor. An object's features are captured by a camera to determine its characteristics. Depending on the observed features, the object is directed to the appropriate workcell. The system updates its database with information about the object.
  • Slide 33
  • 33 Example #1 (contd) Features of an object must be collected while the object is still in front of the camera. The current object and its features apply just to the object in front of the camera, and lose validity once a different object enters the system. The object's features are matched against models in the database. Based on the match, the object is directed to the selected workcell. Alternative: discard the object and later bring it back again in front of the camera.
  • Slide 34
  • 34 Example #2: Air Traffic Control System makes decisions concerning an incoming aircraft's flight path the order in which they should land separation between landings Parameters: position, speed, remaining fuel, altitude, type of aircraft and current wind velocity. Aircraft allowed to land => subsequent actions of this aircraft become critical: cannot violate time constraints Alternative: Ask aircraft to assume a holding pattern.
  • Slide 35
  • 35 Factors that Determine Time Constraints Focus: externally-imposed temporal properties The characteristics of the physical systems being monitored and controlled: speed of the aircraft, speed of the conveyor belt, temperature and pressure The stability characteristics as governed by its control laws: servo control loops of robot hands, fly-by-wire, avionics, fuel injection rate Quality of service requirements: sampling rates for audio and video, accuracy requirement for results Human (re)action times, human sensory perception: time between warning and reaction to warning Events, data and actions inherit time constraints from these factors They determine the semantics (importance, strictness) of time constraints.
  • Slide 36
  • 36 All Time Constraints are Artifacts? Maybe not all of them, but even many externally-imposed constraints are artifacts: Length of a runway or speed of an aircraft - determined by cost and technology considerations; Quality of service requirements - decided by regulatory authorities; Response times guaranteed by service providers - determined by cost and competitiveness factors
  • Slide 37
  • 37 Designer Artifacts Subsequent decisions of the database system designer introduce additional constraints: The type of computing platform used (e.g. centralized vs. distributed) The type of software design methodology used (e.g., data-centric vs. action-centric) The (pre-existing) subsystems used in composing the system The nature of the actions (e.g., monolithic action vs. graph-structured or triggered action) Time constraints reflect the specific design strategy and the subsystems chosen as much as the externally imposed timing requirements
  • Slide 38
  • 38 Decisions on Time Constraints Difficulty of optimal time constraints Determining all related time constraints in an optimal fashion for non-trivial systems is intractable => divide and conquer (and live with acceptable decisions) Multi-layer decision process The decisions made at one level affect those at the other level(s) While no decision at any level is likely to be unchangeable, cost and time considerations will often prevent overhaul of prior decisions
  • Slide 39
  • 39 Decisions on Time Constraints (2) Decisions to be made Whether an action is periodic, sporadic, or aperiodic The right values for the periods, deadlines, and offsets within periods Importance or criticality values Flexibility (dynamic adaptability) of time constraints
  • Slide 40
  • 40 Time Constraints of Events Three basic types of time constraints 1. Maximum: delay between two events Example: Once an object enters the view of the camera, object recognition must be completed within t1 seconds 2. Minimum: delay between two events Example: No two flight landings must occur within t2 seconds 3. Durational: length of an event Example: The aircraft must experience no turbulence for at least t3 seconds before the seat-belt sign can be switched off once again Constraints can be specified between stimulus and response events (max, min, and duration between them can be stated)
  • Slide 41
  • 41 Time Constraints of Events (2) The maximum and minimum type of time constraints of recurring (stimulus) events: rate-based constraints Time constraints determine the constraints on transactions: Rate-based constraints -> periodicity requirements for the corresponding actions Time constraints relating a stimulus and its response -> deadline constraints Specifications of minimal separation between response to a stimulus and the next stimulus -> property of the sporadic activity that deals with that stimulus
  • Slide 42
  • 42 Data in Real-Time Database Systems Data items reflect the state of the environment Data from sensors - e.g., temperature and pressure Derived data - e.g., rate of reaction Input to actuators - e.g., amount of chemicals, coolant Archival data - e.g., history of (interactions with) environment Static data as in conventional database systems
  • Slide 43
  • 43 Time Constraints on Data Where do they come from? state of the world as perceived by the controlling system must be consistent with the actual state Requirements timely monitoring of the environment timely processing of sensed information timely derivation of needed data Temporal consistency of data absolute consistency: freshness of data between actual state and its representation relative consistency: correlation among data accessed by a transaction
  • Slide 44
  • 44 Representation of Temporal Consistency Absolute consistency: Absolute Validity Intervals (AVI) state of environment and its reflection in DB d : (value, avi, ts) value = current state of d ts = time of observation avi = absolute validity interval; d is absolutely consistent iff (current_time - ts(d)) <= avi(d) Relative consistency: Relative Validity Intervals (RVI) among data used to derive other data: for all d, d' in D, |ts(d) - ts(d')| <= rvi(D), where D is the relative consistency set for a transaction
  • Slide 45
  • 45 Static Data and Real-Time Data Static data data in a typical database values not becoming obsolete as time passes Real-time (Temporal) data arrive from continuously changing environment represent the state at the time of sensing has observed time and validity interval users of temporal data need to see temporally coherent views of the data (state of the world) When must the data be temporally consistent? ideally, at all times in practice, only when they are used by transactions
  • Slide 46
  • 46 An Example Data object is specified by (value, absolute validity interval, time-stamp) Interested in {temperature and pressure} with relative validity interval of 5 Let current time = 100 temperature = (347, 10, 95) and pressure = (50, 20, 98) -- temporally consistent (|95 - 98| = 3 <= 5) temperature = (347, 10, 98) and pressure = (50, 20, 91) -- temporally inconsistent (|98 - 91| = 7 > 5)
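The AVI/RVI checks on this slide can be sketched in a few lines of Python. This is an illustrative sketch, not code from the tutorial; the `DataItem` class and function names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    value: float
    avi: float   # absolute validity interval
    ts: float    # timestamp of observation

def absolutely_consistent(d: DataItem, now: float) -> bool:
    # d is fresh if the time since observation is within its AVI
    return (now - d.ts) <= d.avi

def relatively_consistent(items, rvi: float) -> bool:
    # timestamps of all items in the relative consistency set
    # must lie within rvi of each other
    stamps = [d.ts for d in items]
    return (max(stamps) - min(stamps)) <= rvi

now = 100
temperature = DataItem(347, 10, 95)
pressure = DataItem(50, 20, 98)
print(relatively_consistent([temperature, pressure], rvi=5))   # True: |95-98| = 3 <= 5

temperature2 = DataItem(347, 10, 98)
pressure2 = DataItem(50, 20, 91)
print(relatively_consistent([temperature2, pressure2], rvi=5)) # False: |98-91| = 7 > 5
```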
  • Slide 47
  • 47 What Makes the Difference? We have a set of predicates to be satisfied by data Why not use standard integrity maintenance techniques? Not executing a transaction will maintain logical consistency, but temporal consistency will be violated Satisfy logical consistency by CC techniques, such as 2PL Satisfy temporal consistency by time-cognizant transaction processing AVI and RVI may change with system dynamics, e.g. mode changes
  • Slide 48
  • 48 Time Constraints Associated with Actions Time constraints dictate the behavior of the environment constrain the rates and times at which inputs arrive at the system Example: seek permission to land only when aircraft is 10 mins from airport Time constraints prescribe performance of the system dictate the responsiveness of the system to these inputs Example: respond to a landing request within 30 seconds Time constraints are imposed to maintain data temporal consistency Example: actions that update an aircraft's dynamic parameters within 1 second
  • Slide 49
  • 49 Distinct Types of Transactions Write-only transactions (sensor updates): obtain state of the environment and write into the database store sensor data in database (e.g., temperature) monitoring of environment ensure absolute temporal consistency Update transactions (application updates) derive new data and store in database based on sensor and other derived data Read-only transactions read data, compute, and report (or send to actuators)
  • Slide 50
  • 50 Time Constraints on Transactions Time constraints on transactions some come from the need to maintain temporal consistency of data some come from the requirements on reaction time, dictating the responsiveness of the system some come from the designer's choice, specifying the rates and times at which inputs arrive at the system a transaction's value depends on its completion time
  • Slide 51
  • 51 Types of Time Constraints Based on type of time constraints: Periodic - Every 10 secs sample wind velocity - Every 20 secs update robot position Aperiodic - If temperature > 1000, add coolant to reactor within 10 secs Based on value: Hard: must execute before deadline Firm: abort if not completed by deadline Soft: diminished value if completed after deadline
  • Slide 52
  • 52 Dealing with Time Constraint Violations Large negative penalty => a safety-critical or hard time constraint typically arise from external considerations important to minimize the number of such constraints No value after the deadline and no penalty accrues => a firm deadline typically, alternatives exist Result useful even after deadline => a soft deadline system must reassign successors' parameters - so that the overall end-to-end time constraints are satisfied Firm and soft time constraints offer the system flexibility - not present with hard or safety-critical time constraints
  • Slide 53
  • 53 Examples of Time Constraints Specified using ECA (Event-Condition-Action) Rules The time constraints can be specified using ECA rules ON (10 seconds after initiating landing preparations) IF (steps not completed) DO (within 5 seconds abort landing) ON (deadline of object recognition) IF (action not completed) DO (increase importance, adjust deadlines) ON (n-th time violation within 10 secs) IF (crisis-mode) DO (drop all non-essential transactions)
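The ON/IF/DO rules above can be mirrored by a minimal ECA rule engine. The sketch below is a hypothetical illustration, assuming a simple event-keyed dispatch; none of these class or method names come from the tutorial.

```python
# Minimal ECA (Event-Condition-Action) rule engine sketch (illustrative only)
class Rule:
    def __init__(self, event, condition, action):
        self.event = event          # ON: event name that triggers the rule
        self.condition = condition  # IF: predicate over a context dict
        self.action = action        # DO: callable executed when condition holds

class RuleEngine:
    def __init__(self):
        self.rules = {}             # event name -> list of rules

    def register(self, rule):
        self.rules.setdefault(rule.event, []).append(rule)

    def signal(self, event, ctx):
        # Fire every rule registered ON this event whose IF condition holds
        fired = []
        for rule in self.rules.get(event, []):
            if rule.condition(ctx):
                fired.append(rule.action(ctx))
        return fired

engine = RuleEngine()
engine.register(Rule("deadline_of_object_recognition",
                     lambda ctx: not ctx["completed"],
                     lambda ctx: "increase importance, adjust deadlines"))
print(engine.signal("deadline_of_object_recognition", {"completed": False}))
# -> ['increase importance, adjust deadlines']
```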
  • Slide 54
  • 54 Time Constraints: Discussion Understand the issues underlying the origin and semantics of time constraints not all deadlines are given; we need ways to derive time constraints (and semantics) in the least stringent manner flexibility afforded by derived deadlines must be exploited deadline violation must also be handled adaptively Control strategies can be specified by ECA rules
  • Slide 55
  • 55
  • Slide 56
  • 56 III. Real-Time Transaction Processing Outline Priority assignment Scheduling paradigms Priority inversion problem Concurrency control protocols Predictability issues Overload management and recovery
  • Slide 57
  • 57 Priority Assignment Different approaches EDF: earliest deadline first highest value (benefit) first highest (value/computation time) first complex function of deadline, value, slack time Priority assignment has significant impact on database system performance Assignment based on deadline and value has shown good performance
  • Slide 58
  • 58 Non-Real-Time Scheduling Level: operating systems, database systems, etc. Primary Goal: maximize performance Secondary Goal: ensure fairness Typical metrics: minimize response time maximize throughput e.g., FCFS (First-Come-First-Served), RR (Round-Robin), fair-share scheduling
  • Slide 59
  • 59 Goals of Real-Time Transaction Scheduling Maximize the number of transactions (both sensor and user) that meet deadlines Keep data temporally valid on overload, allow invalid intervals on data (note that data with invalid interval may not be used during that invalid time) overload management by trading off quality for timeliness and schedule contingency (or alternative) versions of transactions more on overload management later...
  • Slide 60
  • 60 Execution Time of Transactions t_exec = t_db + t_io + t_int + t_appl + t_comm t_db = processing of DB operations (variable) t_io = I/O processing (variable) t_int = transaction interference (variable) t_appl = non-DB application processing (variable & optional) t_comm = communication time (variable & optional)
  • Slide 61
  • 61 Scheduling Paradigms Scheduling analysis or feasibility checking of real-time computations can predict whether timing constraints will be met Several scheduling paradigms emerge, depending on whether a system performs schedulability analysis if it does, whether it is done statically or dynamically, and whether the result of the analysis itself produces a schedule or plan according to which computations are dispatched at run-time
  • Slide 62
  • 62 Different Paradigms 1. Static Table-Driven approaches: Perform static schedulability analysis The resulting schedule is used at run-time to decide when a computation must begin execution 2. Static Priority Driven Preemptive Approaches: Perform static schedulability analysis but unlike in the previous approach, no explicit schedule is constructed At run-time, computations are executed (typically) highest-priority-first Example: rate-monotonic priority assignment - priority is assigned proportional to frequency
  • Slide 63
  • 63 Different Paradigms (2) 3. Dynamic Planning Based Approaches: Feasibility is checked at run-time, i.e. a dynamically arriving computation is accepted for execution only if it is found feasible (that is, guaranteed to meet its time constraints) One of the results of the feasibility analysis is a schedule or plan that is used to decide when a computation can begin execution. 4. Dynamic Best-effort Approaches: No feasibility checking is done The system tries to do its best to meet deadlines, but since no guarantees are provided, a computation may be aborted during its execution
  • Slide 64
  • 64 Dealing with Hard Deadlines All transactions have to meet the timing constraints best-effort is not enough a kind of guarantee is required Requires periodic transactions only resource requirements known a priori worst-case execution times of transactions are known Use static table-driven or priority-driven approach schedulability analysis is necessary run-time support also necessary
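For periodic transactions with known worst-case execution times, the required schedulability analysis can be as simple as the classic single-processor EDF utilization test. This is a generic sketch of that well-known test, not an analysis method prescribed by the tutorial.

```python
# Classic single-processor EDF schedulability test: a set of independent
# periodic transactions (C = worst-case execution time, T = period, with
# deadline equal to period) is feasible iff total utilization <= 1.
def edf_feasible(transactions):
    # transactions: list of (worst_case_exec_time, period)
    utilization = sum(c / t for c, t in transactions)
    return utilization <= 1.0

print(edf_feasible([(2, 10), (3, 15), (5, 20)]))  # 0.20 + 0.20 + 0.25 = 0.65 -> True
print(edf_feasible([(8, 10), (6, 15)]))           # 0.80 + 0.40 = 1.20 -> False
```

In a real RTDB the test would also have to account for blocking due to data conflicts (t_int on slide 60), which this simple utilization bound ignores.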
  • Slide 65
  • 65 Dealing with Soft/Firm Deadlines Two critical functions: assign transaction priorities resolve inter-transaction conflicts using transaction parameters: deadline, criticality, slack time, etc. For firm deadlines, abort expired transactions For soft deadlines, the transaction generally continues to completion even if the deadline is missed Various time-cognizant concurrency controls have been developed, many of which are extensions of two-phase locking (2PL), timestamp, and optimistic concurrency control protocols
  • Slide 66
  • 66 Time-cognizant Transaction Scheduling Earliest deadline first (EDF) Highest value first Highest value density first (value per unit computation time) Weighted formula: complex function of deadline, value, and remaining work, etc. Earliest Data Deadline First: considering the validity interval Example: DD(Y) is used as the virtual deadline of transaction T [Timeline: activate T, begin T, read X, read Y; data deadlines DD(X) and DD(Y) fall before the deadline of T]
  • Slide 67
  • 67 Example 1: Commit Case [Timeline: activate T, begin T, read X, read Y; DD(X) and DD(Y) do not expire before commit, so X and Y are valid and T makes its deadline] DD = Data deadline
  • Slide 68
  • 68 Example 2: Abort Case [Timeline: activate T, begin T, read X, read Y; DD(Y) expires before T can complete within its deadline, so T is aborted]
  • Slide 69
  • 69 Example 3: Forced Wait [Timeline: activate T, begin T, read X; T is forced to wait for the update to Y, since it will occur soon, then reads the fresh Y before its deadline]
  • Slide 70
  • 70 Example 4: With Data Similarity [Timeline: activate T, begin T, read X, read Y = 15.70; Y is later updated to 15.78, a value similar to the one read (similarity defined in the DB), so data X is OK, data Y is similar, T commits and its deadline is met]
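The four outcomes in Examples 1-4 can be summarized as one decision function over a transaction's deadline and a data item's data deadline. This is an illustrative sketch; the function name, parameters, and similarity flag are assumptions, not the tutorial's algorithm.

```python
# Hypothetical sketch of the commit / abort / wait / similarity decision
# when transaction T reads a data item whose data deadline DD may expire
# before T's own deadline.
def access_decision(txn_deadline, data_deadline,
                    next_update_time=None, similar=False):
    if data_deadline >= txn_deadline:
        return "commit"      # Example 1: data stays valid until T's deadline
    if similar:
        return "commit"      # Example 4: the refreshed value is similar enough
    if next_update_time is not None and next_update_time <= txn_deadline:
        return "wait"        # Example 3: force T to wait for the imminent update
    return "abort"           # Example 2: data expires before T can finish

print(access_decision(txn_deadline=10, data_deadline=15))                       # commit
print(access_decision(txn_deadline=10, data_deadline=4))                        # abort
print(access_decision(txn_deadline=10, data_deadline=4, next_update_time=6))    # wait
print(access_decision(txn_deadline=10, data_deadline=4, similar=True))          # commit
```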
  • Slide 71
  • 71
  • Slide 72
  • 72 Transactions: Concurrency Control Pessimistic Optimistic (OCC) Hybrid (e.g., integrated real-time locking) Speculative Semantic-based Priority ceiling
  • Slide 73
  • 73 Pessimistic Concurrency Control Locks are used to synchronize concurrent actions Two-Phase Locking (2PL) all locking operations precede the first unlock operation in the transaction expanding phase (locks are acquired) shrinking phase (locks are released) suffers from deadlock and priority inversion
  • Slide 74
  • 74 Example of 2PL: Two transactions T1: write_lock(X); read_object(X); X = X + 1; write_object(X); unlock(X); T2: read_lock(X); read_object(X); write_lock(Y); unlock(X); read_object(Y); Y = X + Y; write_object(Y); unlock(Y); Priority of T1 > Priority of T2
  • Slide 75
  • 75 Example of 2PL: Deadlock T1: read_lock(X); read_object(X); write_lock(Y); [blocked] T2: read_lock(Y); read_object(Y); write_lock(X); [blocked] => DEADLOCK!
  • Slide 76
  • 76 Conflict Resolution in 2PL 2PL (or any other locking scheme) relies on blocking the requesting transaction if the data is already locked in an incompatible mode. What if a high priority transaction needs a lock held by a low priority transaction? Possibilities are... let the high priority transaction wait abort the low priority transaction let the low priority transaction inherit the high priority and continue execution The first approach will result in a situation called priority inversion Several conflict resolution techniques are available, but the ones that use both deadline and value show better performance
  • Slide 77
  • 77 Priority Inversion Problem in Locking Protocols What is priority inversion? A low priority transaction forces a higher priority transaction to wait highly undesirable in real-time applications unbounded delay may result due to chained blocking and intermediate blocking: Example: T0 is blocked by T3 for accessing a data object, then T3 is blocked by T2 (priority T0 > T2 > T3)
  • Slide 78
  • 78 Example of 2PL: Priority Inversion T1: write_lock(X); [blocked] ... read_object(X); X = X + 1; write_object(X); unlock(X); T2: read_lock(X); read_object(X); write_lock(Y); unlock(X); read_object(Y); Y = X + Y; write_object(Y); unlock(Y); [Timeline: higher-priority T1 is blocked on X until lower-priority T2 unlocks it - priority inversion]
  • Slide 79
  • 79 Solutions to Priority Inversion Problem Priority abort abort the low priority transaction - no blocking at all quick resolution, but wasted resources Priority inheritance execute the blocking transaction (low priority) with the priority of the blocked transaction (high priority) intermediate blocking is eliminated Conditional priority inheritance based on the estimated length of transaction inherit the priority only if blocking one is close to completion; abort it, otherwise
  • Slide 80
  • 80 Conditional Priority Inheritance Protocol: when Ti requests a data object locked by Tj: if Priority(Ti) < Priority(Tj) then block Ti; else if (remaining portion of Tj > threshold) then abort Tj; else Ti waits while Tj inherits the priority of Ti to complete its execution
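The rule above can be sketched as a single decision function. This is a hedged sketch: numeric priorities with higher = more urgent, and a caller-supplied threshold for the holder's remaining work, are assumptions of the illustration:

```python
def resolve_conflict(req_priority, holder_priority, holder_remaining, threshold):
    """Conditional priority inheritance: decide what happens when a
    transaction requests a data object locked by another.
    Higher number = higher priority (an assumption of this sketch)."""
    if req_priority < holder_priority:
        # Requester has lower priority: it simply blocks.
        return "block requester"
    if holder_remaining > threshold:
        # Holder is far from completion: abort it (priority-abort branch).
        return "abort holder"
    # Holder is close to completion: it inherits the requester's
    # priority and finishes while the requester waits.
    return "holder inherits priority"
```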
  • Slide 81
  • 81 Why Conditional Priority Inheritance? Potential problems of (blind) priority inheritance: life-long blocking - a transaction may hold a lock during its entire execution (e.g., strict 2PL case); a transaction with low priority may inherit the high priority early in its execution and block all the other transactions with priority higher than its original priority; especially severe if low priority transactions are long. Conditional priority inheritance is a trade-off between priority inheritance and priority abort. Not sensitive to the accuracy of the estimation of the transaction length
  • Slide 82
  • 82 Performance Results Priority inheritance does reduce blocking times. However, it is inappropriate under strict 2PL due to life-time blocking of the high priority transaction. It performs even worse than simple waiting when data contention is high Priority abort is sensitive to the level of data contention Conditional priority inheritance is better than priority abort when data contention becomes high Blocking is a more serious problem than resource waste, especially when deadlines are not tight In general priority abort and conditional priority inheritance are better than simple waiting and priority inheritance Deadlock detection and restart policies appear to have little impact
  • Slide 83
  • 83 Optimistic Concurrency Control No checking of data conflicts during transaction execution read phase: read values from DB; updates made to local copies validation phase backward validation or forward validation conflict resolution write phase: if validation ok then local copies are written to the DB otherwise discard updates and (re)start transaction Non-blocking Deadlock free Several conflict resolution policies
  • Slide 84
  • 84 OCC: Validation phase If a transaction Ti should be serialized before a transaction Tj, then two conditions must be satisfied: Read/Write rule: data items to be written by Ti should not have already been read by Tj; Write/Write rule: Ti should not overwrite Tj's writes
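A minimal sketch of the two validation rules, assuming the read and write sets are available as Python sets of data-item names:

```python
def can_serialize_before(write_set_i, read_set_j, write_set_j):
    """Check whether Ti can be serialized before Tj (OCC validation
    sketch; arguments are sets of data-item names)."""
    # Read/Write rule: items Ti writes must not already be read by Tj.
    if write_set_i & read_set_j:
        return False
    # Write/Write rule: Ti must not overwrite Tj's writes.
    if write_set_i & write_set_j:
        return False
    return True
```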
  • Slide 85
  • 85 OCC Example T 1 : read_object (X); X = X + 1; write_object (X);validation T 3 : read_object (Y); Y = Y + 1; write_object (Y);... T 2 : read_object (X); read_object (Y); Y = X + Y; write_object (Y);validation
  • Slide 86
  • 86 OCC: Conflict Resolution When a transaction T is ready to commit, any higher-priority conflicting transaction is included in the set H Broadcasting commit (no priority consideration) T always commits and all conflicting transactions are aborted With priority consideration: if H is non-empty, 3 choices sacrifice policy: T is always aborted wait policy: T waits until transactions in H commits; if they do commit, T is aborted wait-X policy: T commits unless more than X% of conflicting transactions belong to H
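The wait-X policy can be sketched as follows. This is a simplification: H is the set of higher-priority conflicting transactions, and "wait" here stands in for the whole wait-then-possibly-abort behavior of the wait policy:

```python
def wait_x_decision(conflicting, higher_priority, x_percent):
    """Wait-X conflict resolution at validation time: the validating
    transaction T commits (aborting its conflicting transactions)
    unless more than X% of them have higher priority, in which case
    T waits. X=0 degenerates toward sacrifice, X=100 toward wait."""
    if not conflicting:
        return "commit"
    share = 100.0 * len(higher_priority) / len(conflicting)
    return "wait" if share > x_percent else "commit"
```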
  • Slide 87
  • 87 OCC: Comparison Broadcasting commit (no priority consideration): not effective in real-time databases. Sacrifice policy: wasteful - there's no guarantee that a transaction in H will actually commit; if all in H abort, T is aborted for nothing. Wait policy: addresses the above problem, but if T commits after waiting, it aborts lower priority transactions, which may not have enough time to restart and commit; the longer T stays, the higher the probability of conflicts. Wait-X policy: compromise between sacrifice and wait; X=0: sacrifice policy; X=100: wait policy; performance study shows X=50 gives the best results
  • Slide 88
  • 88 Priority Ceiling Protocol Why? to provide the blocking at most once property: the system can compute (pre-analyze) the worst case blocking time of a transaction, and thus schedulability analysis for a set of transactions is feasible. A complete knowledge of data and real-time transactions is necessary: for each data object, all the transactions that might access it need to be known; true in certain applications (hard real-time applications); not applicable to other general applications
  • Slide 89
  • 89 Priority Ceiling Protocol For each data object O: write-priority ceiling: the priority of the highest priority transaction that may write O; absolute priority ceiling: the priority of the highest priority transaction that may read or write O; r/w priority ceiling: dynamically determined priority which equals the absolute priority ceiling if O is write-locked and the write priority ceiling if O is read-locked. Ceiling rule: a transaction cannot lock a data object unless its priority is higher than the current highest r/w priority ceiling of objects locked by other transactions. Inheritance rule: a low priority transaction inherits the higher priority from the ones it blocks. Good predictability but high overhead
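The two ceiling computations can be sketched directly, with numeric priorities (higher = more urgent) assumed for illustration:

```python
def rw_ceiling(write_ceiling, absolute_ceiling, mode):
    """r/w priority ceiling of a locked object: the absolute ceiling
    if it is write-locked, the write ceiling if it is read-locked."""
    return absolute_ceiling if mode == "write" else write_ceiling

def can_lock(req_priority, locked_ceilings):
    """Ceiling rule sketch: grant the lock only if the requester's
    priority is higher than every r/w ceiling among objects currently
    locked by other transactions."""
    return not locked_ceilings or req_priority > max(locked_ceilings)
```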
  • Slide 91
  • 91 Overload Management and Recovery...
  • Slide 92
  • 92 Managing Overloads The result of overload is a slow response for the duration of the overload In real-time databases, catastrophic consequences may arise: hard real-time transactions must be guaranteed to meet deadlines under overloads transaction values must be considered when deciding which transactions to shed missing too many low-valued transactions with soft deadlines may eventually degrade system performance Dealing with overloads is complex and practical solutions are needed
  • Slide 93
  • 93 Quality-Timeliness Tradeoffs during Overload Achieve timeliness by trading off completeness: approximate processing by sampling data and extrapolation consistency: relax correctness requirements in controlled manner (e.g., epsilon-serializability, similarity) currency: process transactions using older versions of data (within the tolerance range of the application) precision: use algorithms that produce lower precision results within the deadline Exploit concepts from imprecise computing, monotonic computing, mandatory/optional structures, multi-precision algorithms, primary/contingency transactions, etc.
  • Slide 94
  • 94 Scheduling for Overload Management Background Dynamic real-time systems are prone to transient overloads requests for service exceed available resources for a limited time causing missed deadlines may occur when faults in the computational environment reduce computational resource available to the system may occur when emergencies arise which require computational resources beyond the capacity of the system Overloads cause performance degradation Schedulers are generally not overload-tolerant
  • Slide 95
  • 95 Scheduling for Overload Management (2) Resource management has two components: scheduling and admission control. Scheduling determines the execution order of admitted transactions, which might not be enough to handle the current overload situation. Admission control determines which transactions should be granted system resources. To resolve transient overloads, the system needs both: deciding when to execute transactions and selecting which transactions to execute (original, alternative, or contingency transaction, if the transaction is accepted)
  • Slide 96
  • 96 Scheduling for Overload Management (3) Goal: dynamic overload management with graceful performance degradation (meeting all critical deadlines) Problem: need to handle complex workload critical and non-critical transactions -- some are sporadic and others aperiodic (no minimum inter-arrival time information available) non-critical transactions can be discarded in a controlled manner while critical transactions are replaced by alternative or contingency transactions (with shorter execution time) resources are reallocated among transactions that are admitted to the system using value-functions
  • Slide 97
  • 97 Scheduling for Overload: Assumptions Transaction and Workload: Critical transactions are sporadic and have a corresponding contingency transaction Non-critical transactions are aperiodic Each transaction is pre-declared and pre-analyzed with known worst case execution time Critical deadlines must be guaranteed even under overload conditions System Characteristics: Dedicated CPU for scheduling activities is desirable; otherwise, only simple policies can be implemented
  • Slide 98
  • 98 Scheduling Module Scheduler consists of several components. Pre-analysis of schedulability: critical transactions are pre-analyzed to check whether they can be executed properly and how much reduction in resource requirement can be achieved by using contingency transactions. Admission controller determines which transactions will be eligible for scheduling. Scheduler can schedule according to different metrics: deadline-driven; value-driven. Overload resolver decides the overload resolution actions. Dispatcher dispatches from the top of the ready queue (highest priority)
  • Slide 99
  • 99 Scheduling Components
  • Slide 100
  • 100 Overload Resolution Strategies Admission Controller: reject transaction; admit contingency action. Scheduler: drop transaction (firm/soft); replace transaction with contingency action (hard); postpone transaction execution (soft)
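A toy sketch combining the admission-control and scheduling actions above; the single capacity check and the class names ('hard', 'firm', 'soft') are simplifications of the workload model on the earlier slides:

```python
def resolve_overload(txn_class, admitted, capacity):
    """Overload-resolution sketch: under overload, a hard-deadline
    transaction is replaced by its (shorter) contingency action, a
    firm one is dropped, and a soft one is postponed; otherwise the
    transaction is admitted as-is."""
    if admitted >= capacity:  # transient overload: shed load by class
        if txn_class == "hard":
            return "replace with contingency action"
        if txn_class == "firm":
            return "drop"
        return "postpone"     # soft deadline: run later if possible
    return "admit"
```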
  • Slide 101
  • 101 Recovery Issues Recovery of temporal as well as static data necessary Not always necessary to recover original state because of temporal validity intervals and application semantics: if recovery takes longer than the absolute validity interval, it would be a waste to recover that value example: recovery from a telephone connection switch failure if connection already established: recover billing information and resources, but no need to recover connection information if connection was being established: recover assigned resources
  • Slide 102
  • 102 Recovery Issues (2) Real-time database recovery must consider time and resource availability: recovery must not jeopardize ongoing critical transactions; available transaction semantics (or state): contingency or compensating transactions can be used; state of the environment: extrapolation of state may be possible, or more up-to-date data may be available from the sensor soon. It's appropriate to use partitioned and parallel logging so that critical data with short validity intervals can be recovered first, without going through the entire log: classify data according to its update frequency and importance, and utilize non-volatile high-speed memory for logging
  • Slide 104
  • 104 IV. Database Techniques for Real-Time Applications Outline Active, main-memory, and object-oriented databases Flexible security paradigm for real-time applications Embedded databases
  • Slide 105
  • 105 Active Database Systems Database manager reacts to events; transactions can trigger other transactions; triggers and alerters. Actions specified as rules: ECA-rules (event - condition - action): upon Event occurrence, evaluate Condition, and if the condition is satisfied, trigger Action. Coupling modes: immediate (triggered action is executed right away), deferred (it is executed at the end of the current transaction), detached (scheduled as a separate transaction). Cascaded triggering is possible
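A minimal ECA rule engine illustrating the immediate and deferred coupling modes (the detached mode, which schedules a separate transaction, is only noted in a comment); the rule structure and names are illustrative, not a real active-DBMS API:

```python
# Sketch of an ECA (event-condition-action) rule engine with
# immediate and deferred coupling modes.

class Rule:
    def __init__(self, event, condition, action, coupling="immediate"):
        self.event, self.condition = event, condition
        self.action, self.coupling = action, coupling

class RuleEngine:
    def __init__(self):
        self.rules = []
        self.deferred = []  # actions postponed to end of transaction

    def signal(self, event, db):
        """On Event: evaluate Condition; if satisfied, trigger Action."""
        for r in self.rules:
            if r.event == event and r.condition(db):
                if r.coupling == "immediate":
                    r.action(db)       # run right away
                elif r.coupling == "deferred":
                    self.deferred.append(r.action)
                # 'detached' would schedule a separate transaction

    def commit(self, db):
        """Run deferred actions at the end of the current transaction."""
        for action in self.deferred:
            action(db)
        self.deferred.clear()
```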
  • Slide 106
  • 106 Active Real-Time Database Systems Real-time systems are inherently reactive ECA-rules provide uniform specification of reactive behavior Problems with active database systems techniques: Additional sources of unpredictability event detection and rule triggering all coupling modes are not feasible No specification of time constraints Techniques are not time-cognizant
  • Slide 107
  • 107 [Timeline figure: total response time = event detection time + rule execution time + action execution time. Milestones along the time axis: event occurrence, event detection, composite event detection, event delivery, rule retrieval, condition evaluation, action spawning, action execution start, action execution complete. The temporal scope covers the total response time]
  • Slide 108
  • 108 Main Memory RTDBS Characteristics of Main Memory DBS the primary copy of data resides in main memory, not in disks as in conventional database systems memory resident data need a backup copy on disk Why being pursued? it becomes feasible to store larger databases in memory as memory becomes cheaper and chip densities increase direct access in memory provides much better response time and transaction throughput, and improved predictability due to the lack of I/O operations
  • Slide 109
  • 109 Main Memory RTDBS (2) Difference from disk-resident databases with large cache it can use index structures specially designed for memory resident data (e.g., T-tree instead of B-tree) it can recognize and exploit the fact that some data reside permanently in memory -> data partitioning Data partitioning can be effectively used different classes of data: hot, warm, cool, and cold, based on the frequency of access and/or timing constraints of the access (deadline of the transactions) in telephone switching systems, for example, routing information is hot, while customer address data is cold
  • Slide 110
  • 110 Main Memory RTDBS (3) Consequences of memory board failures in MMDBS: typically, the entire machine needs to be brought down, losing the entire DB, while in a disk-resident DB, only the affected portion will be unavailable during recovery; recovery is time-consuming, and having a very recent backup available is highly preferred -> more backups need to be taken frequently, resulting in high cost --- performance of the backup mechanism is critical
  • Slide 111
  • 111 Impacts of Memory-Residency in RTDBS Concurrency control: since lock duration is short, using small locking granules to reduce contention is not effective --- large lock granules are appropriate in MM-RTDBS; even serial execution can be a possibility, eliminating the cost of concurrency control; potential problems of serial execution: long transactions cannot run concurrently with short ones, and synchronization is still needed for multiprocessor systems; lock information can be included in the data object, reducing the number of instructions for lock handling --- performance improvement
  • Slide 112
  • 112 Impacts of Memory-Residency in RTDBS (2) Commit processing to protect against failures, logging/backup necessary --- log/backup must reside in stable storage (e.g., disks) before a transaction commits, its activity must be written in the log: write-ahead logging (WAL) logging threatens to undermine performance advantage: response time: transaction must wait until logging is done on disk -> logging can be a performance bottleneck possible solutions: small in-memory log, using non-volatile memory (e.g., flash memory) pre-commit and group commit strategy
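The group-commit strategy mentioned above can be sketched by buffering log records and flushing them in batches, so one stable-storage write covers several committing transactions; the group size and counters are illustrative, and the actual disk write is elided:

```python
# Group-commit sketch: amortize the cost of forcing the log to
# stable storage by flushing several transactions' records at once.

class GroupCommitLog:
    def __init__(self, group_size):
        self.group_size = group_size
        self.buffer = []   # in-memory log records awaiting a flush
        self.flushes = 0   # number of stable-storage writes performed

    def commit(self, txn_id, records):
        """Append a committing transaction's log records; flush when
        the group is full (write-ahead: the flush must complete
        before the commits it covers are acknowledged)."""
        self.buffer.append((txn_id, records))
        if len(self.buffer) >= self.group_size:
            self.flush()

    def flush(self):
        # One write covers every buffered transaction's records.
        self.flushes += 1
        self.buffer.clear()
```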
  • Slide 113
  • 113 Impacts of Memory-Residency in RTDBS (3) Query processing: sequential access is not significantly faster than random access for memory-resident data -> techniques taking advantage of faster sequential access lose merit; the query processor must focus on actual processing costs, instead of minimizing the number of disk accesses; costly operations such as index creation or data copying should be identified first, and then the processing strategy can be designed to reduce their occurrences; because of memory residency of data, it is possible to construct compact data structures for speedup --- e.g., join operation using pointers
  • Slide 114
  • 114 Trends in Memory-Resident RTDBS Extended use of pointers: relations are stored as linked lists of tuples, which can be arrays of pointers to attribute values. Combined hashing and indexing: linear hashing for unordered data and T-tree for ordered data. Large lock granules or multi-granule locks. Deferred updates, instead of in-place updates. Fuzzy checkpointing for reduced transaction locking time. Special-purpose hardware support for logging and recovery. Object-oriented design and implementation
  • Slide 115
  • 115 Object-Oriented RTDBS OO data models support modeling, storing and manipulation of complex objects and semantic information in databases; encapsulated objects. OO data models need (for RT applications) time constraints on objects, i.e., attributes and methods. Objects are more complex -> unit of locking is the object -> less concurrency; memory-resident RTDB may fit well with this restriction; inter-object consistency management could be difficult. Need better solutions to provide higher concurrency and predictable execution for RT applications
  • Slide 117
  • 117 Real-Time Security Paradigm...
  • Slide 118
  • 118 Real-Time Secure Data Management Characteristics transactions with timing constraints data with temporal properties mixture of sensitive and unclassified data Requirements timeliness and predictability temporal consistency adaptive security enforcement high performance
  • Slide 119
  • 119 Real-Time Secure Data Management Issues integrate support of different types of requirements predictability yet flexible execution conflicts between real-time and security real-time management of resources high performance yet fault-tolerant trade-offs scalability of solutions
  • Slide 120
  • 120 Database Security Security services to safeguard sensitive information encryption, authentication, intruder detection... Multilevel security (MLS) objects are assigned with security classification subjects access objects with security clearance no flow of information from higher level to lower one Applications almost everywhere (becoming a buzzword) more flexibility necessary (from static, known environment to dynamic unknown environment)
  • Slide 121
  • 121 Trends Increasing number of systems operate in unpredictable (even hostile) environments task set, resource requirements (e.g., worst-case execution time)... High assurance required for performance-critical applications System properties for high assurance real-time (timeliness, temporal consistency..) security (confidentiality, authentication..) fault-tolerance (availability, reliability..) Each property has been studied in isolation
  • Slide 122
  • 122 Security and Real-Time For timeliness, no priority inversion in real-time applications: tasks with earlier deadlines or higher criticality have higher priority for better service. In secure systems, no security violation is allowed. Incompatible under the binary notion of absolute security: priority inversion vs security violation. Higher security services require more resources
  • Slide 123
  • 123 Example of the Problem Both require lock on the resource How to resolve this conflict? if lock is given to T 1, security violation if lock is given to T 2, priority inversion T1T1 - high priority - high security T2T2 - low priority - low security Access Resource
  • Slide 124
  • 124 Requirement for Real-Time Secure DBS Supporting both requirements of real-time and security for real-time databases: how to provide acceptably high security while remaining available and providing timely services?
  • Slide 125
  • 125 Research Issues Flexible security vs absolute security paradigm for flexible security services identifying correct metrics for security level Adaptive security policies Mechanisms to enforce required level of security and trading-off with other requirements: access control, authentication, encryption,.. time-cognizant protocols, data deadlines,... replication, primary-backup,... Specification to express desired system behavior verification of consistency/completeness of specification
  • Slide 126
  • 126 Flexible Security Services Flexible vs absolute (binary) security: the traditional notion of security is binary: secure or not. Problem of the binary notion of security: difficult to provide an acceptable level of security while satisfying other conflicting requirements. Research issue: quantitative flexible security levels. One approach: represent in terms of % of potential security violations. Problem: not precise --- the percentage alone reveals nothing about implications on system security; e.g., 1% violation may leak the most sensitive data out
  • Slide 127
  • 127 Flexible Security for Access Control Possible approaches to provide flexible security: control potential violations between certain security levels; even if it allows potential security violations, it does not completely compromise the security of the system; use different algorithms in an adaptive manner. [Figure: four possible level configurations A-D over the classes Top secret, Secret, Confidential, Unclassified]
  • Slide 128
  • 128 Flexible Security Policies (5 levels) Completely secure: no violations allowed Secure levels 2, 3 & 4: high 3 levels kept completely secure Secure levels 3 & 4: high 2 levels kept completely secure Split security: violations allowed between top 2 levels, and among low 3 levels Secure level 4: highest level kept completely secure No security: violations can occur between any levels Gradual security: control the number of violation between each level
  • Slide 129
  • 129 Performance of Flexible Access Control Significant improvement in real-time performance as more potential covert channels are allowed: completely secure (6.5%) vs no security (3.3%) for 500 data items; completely secure (5%) vs no security (1%) for 1000 data items. Trade-off capacities of security policies are strictly ordered: from completely secure through multiple secure levels to no security
  • Slide 130
  • 130 Improved Functionality Exploiting real-time properties for improved/new features Example: Intrusion detection sensitive data objects are tagged with time semantics that capture normal behavior about read/update time semantics should be unknown to intruder violation of security policy can be detected: suspicious update request can be detected using a periodic update rate tolerance in the deviation from normal behavior can be parameterized
  • Slide 131
  • 131 Adaptable Security Manager Need for resource tradeoffs in database services Adaptable security manager fits well with the concept of multiple service levels in real-time secure database systems Short term relaxation of security could be preferable to missed critical deadlines aircraft attack warning during burst of battlefield updates loss of production time for missed agile manufacturing command
  • Slide 132
  • 132 Features of Adaptable Security Manager Multiple security levels on users/objects or communications computation costs increase with level of security Client negotiated range of security levels for transaction communications Dynamic level changes as a function of real-time load
  • Slide 133
  • 133 Security Manager Environment [Architecture figure: clients send session & transaction requests to the Security Manager, which maintains a Client Table (client security level & key) and a Session Table (session keys & status); transactions are handed off through a Mapper/Admission Control to the DB Scheduler and worker threads, which perform object reads & writes against BeeHive TransData and return transaction results]
  • Slide 134
  • 134 Security Level Synchronization [Timing diagram: the Security Manager and Client X exchange transaction requests (Sn) and responses (Rn) while moving between security levels 0-3; a level change is a two-step switch: prepare to switch once the last message is accounted for, then switch after the acknowledgment is received]
  • Slide 135
  • 135 Performance: Adaptive vs. Non-Adaptive In adaptive control, the system lowers the security dynamically
  • Slide 136
  • 136 Level Switching (100% adaptive client) [Graph: security level (0-3) and miss ratio over time, showing the security level change and the corresponding miss ratio change]
  • Slide 137
  • 137 Performance Results Good performance gains achievable in soft real-time system during overload conditions When the overload is not severe, switching the security level can bring the desired performance back (as shown in the graph) If the system is too much overloaded or some component failed, then even reducing the security level to 0 cannot keep the system working properly (meeting critical deadlines) Performance gain depends also on other factors such as message size and I/O cost: significant performance improvement with large message sizes with large I/O overhead
  • Slide 139
  • 139 Embedded Real-Time Databases....
  • Slide 140
  • 140 Whats an Embedded Database? Same principal functionality as a desktop database (excluding the most complex operations). Two types: application-embedded databases: generally few real-time requirements; device-embedded databases: embedded systems; strict timing constraints involved
  • Slide 141
  • 141 Requirements of Device-Embedded DBS Small footprint due to limited storage and memory resources (~150 KB). High dependability: continuous uptime with little or no maintenance (i.e., the database should be able to perform recovery by itself). Mobile capabilities. Interoperability and portability. Communication and synchronization with external databases. Real-time constraints. Maintainability. Security
  • Slide 142
  • 142 Market Analysis Customers: Embedded databases are sold to manufacturers and resellers for inclusion in devices and applications Potential: Expected to be the fastest growing segment of database market Market trend: Device-embedded databases increase Application-embedded database growth is only moderate
  • Slide 143
  • 143 Expected Market Growth Sales revenue (millions of US dollars) Source: Dataquest
  • Slide 144
  • 144 Existing Embedded Databases Progress Ardent Software InterSystems Centura SQLBase Embedded Database IBM DB2 Everywhere and Universal Database Satellite Microsoft SQL Server Oracle8i Lite Sybase SQL Anywhere Studio (and UltraLite Deployment Option) Pervasive.SQL 2000 SDK for Mobile and Embedded Devices
  • Slide 145
  • 145 Features/Properties of Current Commercial Embedded Database Systems Down-scaled version of the full-sized versions, i.e., still a conventional database Primarily targeted for general-purpose applications that require DBs No explicit support for real-time features
  • Slide 146
  • 146 Embedded Database Systems Applications Mobile databases: portable computing devices; smart cellular phones with Internet access; PDAs; laptops. Embedded systems: car engine control, brake system control,... Tiny embedded databases: smart cards; intelligent appliances; network routers and hubs; set-top Internet-access boxes
  • Slide 147
  • 147 Embedded DBS Applications (2) Process control Log sensor data and then upload to central data warehouse Routers Routing tables Special query operators, collapsing database entries Web-based data services Increasing demands of QoS (time constraints) Security secure transactions
  • Slide 148
  • 148 Embedded DBS: Research Challenges Portability and interoperability Availability Recovery protocols that recover the database while the database is still guaranteeing some level of service Continuous up-time Query language What are the necessary operators (application dependent) Concurrency control schemes Architecture Building a database from a portfolio of modules (components) Application-dependent tuning of functionality and configuration Minimizing functionality -> minimizing memory usage
  • Slide 149
  • 149 Execution Time of Transaction T_exec = T_db + T_io + T_block + T_appl + T_com. T_db: time for executing database operations; variable (depends on the state of the DB). Options: upper bound on maximum size; dynamic estimate based on statistics; dynamic measurement based on pre-execution. One-size-fits-all is not applicable in an RT environment: tailored real-time database systems; unbundling of time-cognizant functionality
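The cost model above is a plain sum of its terms, and T_db can be maintained as a dynamic estimate based on statistics, e.g., an exponential moving average over measured executions. The smoothing factor alpha is an assumption of this sketch, not from the slides:

```python
def estimated_exec_time(t_db, t_io, t_block, t_appl, t_com):
    """T_exec = T_db + T_io + T_block + T_appl + T_com, as on the
    slide; all terms in the same time unit (e.g., ms)."""
    return t_db + t_io + t_block + t_appl + t_com

def update_db_estimate(old_estimate, measured, alpha=0.5):
    """Dynamic estimate of T_db from statistics: an exponential
    moving average of measured execution times (alpha assumed)."""
    return alpha * measured + (1 - alpha) * old_estimate
```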
  • Slide 151
  • 151 V. Applications of Real-Time Databases: Real-World Requirements
  • Slide 152
  • 152 Applications Air Traffic control Aircraft Mission Control Command Control Systems Distributed Name Servers Manufacturing Plant Navigation and Positioning Systems automobiles, airplanes, ships, space station Network Management Systems
  • Slide 153
  • 153 Applications (2) Real-time Process Control Spacecraft Control Telecommunication Cellular phones Normal PBX Training Simulation System Pilot training Battlefield training
  • Slide 154
  • 154 Air Traffic Control Multiple control centers. Controls airspace: terminal areas; enroute airspace. Database: aircraft: aircraft identification, transponder code, altitude, position, speed, etc.; flight plans: origin, route, destination, clearances, etc.; environment data: airport configuration, navigation facilities, airways, restricted areas, notifications, weather data, etc.
  • Slide 155
  • 155 Air Traffic Control (2) Contents and Size of DB: 350 airports, 250 navigation facilities, and 1500 aircraft; weather data, etc. DB size: ~20,000 entities. Time requirements: mean latency: 0.5 ms; max latency: 5 ms; external consistency: 1 sec; temporal consistency: 6 secs; permanence requirements: 12 hours
  • Slide 156
  • 156 Military Aircraft Mission Control (contd) Database: Tracking information 2000 air vehicles 250 ground entities, e.g., vehicles, airports, radars, etc. flight plan, maps, intelligence etc. DB size: ~3000 - 4000 entities
  • Slide 157
  • 157 Military Aircraft Mission Control Time requirements mean latency: 0.05 ms max latency: 1 - 25 ms external consistency: 25 ms temporal consistency: 25 ms permanence req.: 4 hours
  • Slide 158
  • 158 Training Simulation System Simulation of a real environment. Purpose: tools for training personnel: use of equipment, e.g., pilot training; strategy/tactics, e.g., simulated battlefield training
  • Slide 159
  • 159 Training Simulation System (contd) DB two separate DBs static DB (e.g., maps etc) constructed off-line read-only access dynamic DB (e.g., vehicles) tot. DB size: ~100,000 entities Time requirements mean latency: 0.5 ms max latency: < 5 ms external consistency: N/A temporal consistency: (0.25 sec) permanence req.: 5 hours
  • Slide 160
  • 160 Integrated Automobile Control TCM - Transmission Control Module TCS - Traction Control System CBC - Corner Braking Control DCS - Dynamic Safety Control ESP - Electronic Stabilization Program Car Diagnosis Systems Hard and soft TCs Significant interaction with external environment Distributed
  • Slide 162
  • 162 VI. Real-Time Database Research Prototype: BeeHive System
  • Slide 163
  • 163 Commercial RTDBs Polyhedra http://www.polyhedra.com/ Tachys, (Probita) http://www.probita.com/tachys.html ClustRa http://www.clustra.com DBx Eaglespeed (Lockheed Martin) RTDBMS (NEC) (Mitsubishi)
  • Slide 164
  • 164 RTDBS Research Projects BeeHive University of Virginia, USA DeeDS University of Skövde, Sweden Rodain University of Helsinki, Finland RT-SORAC University of Rhode Island, USA MDARTS University of Michigan, USA STRIP Stanford University, USA
  • Slide 165
  • 165 BeeHive: Global Multimedia Database Support for Dependable, Real-Time Applications Real-Time Systems Group Dept of Computer Science University of Virginia
  • Slide 166
  • 166 Applications of BeeHive Real-Time Process Control hard deadlines, main memory, need atomicity and persistence limited or no (i) schema, (ii) query capability Agile Manufacturing Business Decision Support Systems information dominance Intelligence Community Global Virtual Multimedia Real-Time DBs
  • Slide 168
  • 168 Transaction Deadlines in BeeHive Hard - deadline must be met else catastrophic result suitable for some RTDB, in which timing constraints must be guaranteed, and the system supports predictability for certain guarantees Firm - deadline must be met else no value to executing transaction (just abort) Soft - deadline should be met, but if not, continue to process until complete
  • Slide 169
  • 169 Absolute Validity Interval (AVI) and Relative Validity Interval (RVI) [diagram: data items X and Y with AVIs of 10 and 20] Absolute Validity Interval (X) = 10 Absolute Validity Interval (Y) = 20 Relative Validity Interval (X, Y): |timestamp(X) - timestamp(Y)| < 15
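The two checks on the slide can be sketched directly. Assuming timestamps and intervals share one time unit (function names are ours, not part of any RTDB API):

```python
def absolutely_valid(timestamp, now, avi):
    """Absolute consistency: the reading is no older than its AVI."""
    return now - timestamp <= avi

def relatively_valid(ts_x, ts_y, rvi):
    """Relative consistency: the two readings were taken close together."""
    return abs(ts_x - ts_y) <= rvi

# Example with the slide's intervals: avi(X)=10, avi(Y)=20, rvi(X,Y)=15
now = 25
ts_x, ts_y = 18, 12  # write timestamps of X and Y
pair_valid = (absolutely_valid(ts_x, now, 10)
              and absolutely_valid(ts_y, now, 20)
              and relatively_valid(ts_x, ts_y, 15))
```

Note that a pair can fail the relative check even when both items pass their absolute checks, which is why both forms of temporal consistency must be enforced.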
  • Slide 170
  • 170 Data in BeeHive Data from sensors (including audio/video) Derived data Time-stamped data Absolute consistency - environment and DB Relative consistency - multiple pieces of data Schema and meta data User objects (with semantics) Pre-declared transactions (with semantics)
  • Slide 171
  • 171 Global Virtual Databases - BeeHive Dynamically reconfigure and collect DBs (Tailored for Some Enterprise) Interact with External DBs Utilize Distributed Execution Platforms Properties Real-Time QoS Fault Tolerance Security
  • Slide 172
  • 172 BeeHive System [architecture diagram: native BeeHive sites alongside external search engines, RDBMSs, OODBs, and raw data sources, each attached through a BeeHive Wrapper (BW)]
  • Slide 173
  • 173 BeeHive Object Model BeeHive Object is specified by N, the object ID A, set of attributes (name, domain, values) value -> value and validity interval semantic information M, set of methods name and location of code, parameters, execution time, resource needs, other semantic information CF, compatibility function T, timestamp
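An illustrative rendering of this object model as a Python dataclass, keeping only N, A, and T (the compatibility function CF and the method metadata M are omitted, and all field names are ours):

```python
from dataclasses import dataclass

@dataclass
class BeeHiveObject:
    """Sketch of the BeeHive object model; names are illustrative."""
    oid: str                 # N: unique object ID
    attributes: dict         # A: name -> (value, absolute validity interval)
    timestamp: float = 0.0   # T: time of last update

    def is_fresh(self, name, now):
        """Absolute consistency: the attribute is still within its AVI."""
        _value, avi = self.attributes[name]
        return now - self.timestamp <= avi

obj = BeeHiveObject("sensor-42", {"temp": (21.5, 100.0)}, timestamp=50.0)
```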
  • Slide 174
  • 174 BeeHive Transactions BeeHive transaction is specified by TN, unique ID XT, execution code I, importance RQ, set of requirements (for each of RT, QoS, FT, and security) and optional pre and post conditions P, policy for tradeoffs Example: if all resources cannot be allocated reduce FT requirement from 3 to 2 copies.
  • Slide 175
  • 175 Dealing With Soft/Firm RT Transactions Resolve inter-transaction conflicts in time cognizant manner (concurrency control) Assign transaction priorities (cpu scheduling)
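A common time-cognizant priority assignment for soft/firm transactions is earliest deadline first (EDF). A minimal sketch of deadline-ordered dispatch with a heap, not the actual BeeHive scheduler:

```python
import heapq

def edf_order(transactions):
    """Dispatch transactions in earliest-deadline-first order.
    Each transaction is a (deadline, name) pair; the heap keeps
    the smallest deadline on top."""
    heap = list(transactions)
    heapq.heapify(heap)
    order = []
    while heap:
        _deadline, name = heapq.heappop(heap)
        order.append(name)
    return order

edf_order([(30, "T2"), (10, "T1"), (20, "T3")])  # -> ["T1", "T3", "T2"]
```

EDF is optimal for underloaded systems, but as noted later in the exercises it degrades badly under overload, which motivates admission control and overload-aware variants.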
  • Slide 176
  • 176 Goals Maximize the number of TRs (sensor and user) that meet deadlines Keep data temporally valid; on overload, allow invalid intervals on data (note that data with an invalid interval may not be used during that time)
  • Slide 177
  • 177 The External Interface [diagram: the Cogency Monitor (external to BeeHive) connects BeeHive to the Internet and open databases; it takes in structured, unstructured, and raw data, performs data manipulation and BeeHive integration, maintains object data, and returns data as objects]
  • Slide 178
  • 178 Taxonomy of external data Structured (databases) Unstructured (search engines) Raw (video, audio, sensors)
  • Slide 179
  • 179 Cogency Monitor Support value added services RT, FT, QoS, and Security Execute client supplied functionality Map incoming data into BeeHive objects Monitor the incoming data for correctness and possibly make decisions based on the returned data not just a firewall
  • Slide 180
  • 180 Cogency Monitor GUI Pick and choose from template (value added services) Upon each choice, a compatibility function executes that limits other choices Identify added functionality correctness mapping to internal BeeHive objects Automatic generation of the Cogency Monitor generated automatically using library functions (not implemented yet)
  • Slide 181
  • 181 Value Added Services of Cogency Monitor
  • Slide 182
  • 182 Unstructured cogency monitor
  • Slide 183
  • 183 Current Research Activities in BeeHive [diagram: basic BeeHive storage manager with security, RTDB internals, admission control, and RT threads; database; BeeHive front end; Java Cogency Monitor; expanded DB; simulation]
  • Slide 184
  • 184 Summary: BeeHive Project Global real-time database system object-based with added object semantics support in RT, FT, QoS, and Security different types of data: video, audio, images and text sensors and actuators Novel component technology data deadline, forced delay, conditional priority inheritance real-time logging and recovery security-performance tradeoff resource models for admission control Cogency Monitor
  • Slide 185
  • 185
  • Slide 186
  • 186 VII. Trends, Challenges, & Research Issues
  • Slide 187
  • 187 Trends and Challenges Current state Research focused on soft or firm RT transactions with flat transaction structure on centralized database systems Deals with alpha-numeric data with read/write operations Trends Applications are becoming large, complex, and distributed Operate in a highly dynamic environment, yet require predictable performance Need to support multimedia applications Need to support more active features such as triggering mechanisms
  • Slide 188
  • 188 Trends and Challenges (2) Trends Composable real-time components architecture: how do components relate abstractions/encapsulations of components dynamic scheduling and resource management tools for estimating/determining characteristics Large-scale distributed applications predictable execution even with changing network delays Testing and validation techniques People in industry want to use COTS!!
  • Slide 189
  • 189 Trends and Challenges (3) Changing operating environments Embedded Systems every embedded OS will require real-time scheduling and some form of real-time data management support; massive volumes; ubiquitous and pervasive computing is everywhere Networks convergence of telecom and data networks exponentially growing network services and applications WWW, E-commerce; must deal with proxies and caches to support different QoS requirements
  • Slide 190
  • 190 Trends and Challenges (4) Requirements and Challenges ability to perform increasingly complex functions light, small, and reliable heterogeneity efficient resource management integration with other types of requirements (security and fault-tolerance, ) features to be unbundled such that only necessary functions can be selected for specific applications
  • Slide 191
  • 191 Future Research Areas that Affect RTDB Research Interface and component libraries open interfaces dealing with semantic mismatches micro and macro components Real-time data/information centric view as opposed to task centric view currently used Adaptive scheduling and decision making based on changing situation and incomplete workload and component profiles Component-based tool sets Configuration tools Tools to specify and integrate requirements of real-time and fault- tolerance
  • Slide 192
  • 192 Future Research Areas that Affect RTDB Research (2) New Requirements Complex software must evolve Software must be portable to other platforms develop once verify once certification and verification is very expensive port and integration should be automatic Flexible real-time data support one-size-fits-all does not work: small with minimal functionality for embedded systems while complete and full functionality for back-end server applications
  • Slide 193
  • 193 Future Research Areas in RTDBS Resource management and scheduling temporal consistency guarantee (especially relative validity intervals) interactions between hard and soft/firm RT transactions transient overload handling I/O and network scheduling models which maximizes both concurrency and resource utilization support of different transaction types for flexible scheduling: Alternative, Compensating, Imprecise Recovery availability (partial) of critical data during recovery semantic-based logging and recovery
  • Slide 194
  • 194 Future RTDBS Research Areas (2) Concurrency Control alternative correctness models (relaxing ACID properties) integrated and flexible schemes for concurrency control Fault tolerance and security models to interact with RTDBS Query languages for explicit specification of real-time constraints -> RT-SQL Distributed real-time databases commit processing distribution/replication of data recovery after site failure predictable (bounded) communication delays
  • Slide 195
  • 195 Future RTDBS Research Areas (3) Data models to support complex multimedia objects Schemes to process a mixture of hard, soft, and firm timing constraints and complex transaction structures Support for integrating requirements of security and fault-tolerance with real-time constraints Performance models and benchmarks Support for more active features in real-time context techniques for bounding time in event detection, rule evaluation, rule processing mode, etc. associate timing constraints with triggering mechanisms Interaction with legacy systems (conventional databases)
  • Slide 196
  • 196
  • Slide 197
  • 197 VIII. Exercises
  • Slide 198
  • 198 Exercise (1) Suppose we have periodic processes P1 and P2, which measure pressure and temperature, respectively. The absolute validity interval of both of these parameters is 100 ms. The relative validity interval of a temperature-pressure pair is 50 ms. What is the maximum period of P1 and P2 that ensures that the database system always has a valid temperature-pressure pair reading?
  • Slide 199
  • 199 Exercise (2) Sometimes a transaction that would have been aborted under the two-phase locking protocol can commit successfully under the optimistic protocol. Why is that? Develop a scenario in which such a transaction execution occurs.
  • Slide 200
  • 200 Exercise (3) Explain why EDF does not work well in heavily loaded real-time database systems, and propose how you can improve the success rate by adapting EDF. Will your new scheme work as well as EDF in lightly loaded database systems? Will it work well in real-time applications other than database systems?
  • Slide 201
  • 201 Exercise (4) Generate examples of an application where it is permissible to relax one or more ACID properties of transactions in real-time database systems.
  • Slide 202
  • 202 Exercise (5) Suppose a transaction T has a timestamp of 100. Its read-set has X1 and X2, and its write-set has X3, X4, and X5. The read timestamps of these data objects are (prior to adjustment for the commitment of T) 5, 10, 15, 16, and 18; their write timestamps are 90, 200, 250, 300, and 5, respectively. What should the read and write timestamps be after the successful commitment of T? Will the values of X3, X4, and X5 be changed when T commits?
  • Slide 203
  • 203 Exercise (6) Why are the concurrency control protocols used in conventional database systems not very useful for real-time database systems? What information can be used by real-time database schedulers?
  • Slide 204
  • 204 Exercise (7) Compare pessimistic and optimistic approaches in concurrency control when applied to real-time database systems. Discuss different policies in optimistic concurrency control and their relative merits.
  • Slide 205
  • 205 Exercise (8) Are the techniques developed for real-time operating systems schedulers directly applicable to real-time database schedulers? Why or why not?
  • Slide 206
  • 206 Exercise (9) Discuss design criteria for real-time database systems that differ from those of conventional database systems. Why may conventional recovery techniques based on logging and checkpointing not be appropriate for real-time database systems?
  • Slide 207
  • 207 Exercise (10) What are the problems in achieving predictability in real-time database systems? What are the limitations of the transaction classification method we discussed in this course to support predictability?