Lecture 01

Transcript of Lecture 01

  • Lecture-01: Measurements Basic Concepts. In an experiment we are concerned with quantifying the measurement, and its representation must be consistent with a certain unit of measurement.

  • Let's define some metrological terms! Readability: This term indicates the closeness with which the scale of the instrument may be read.

    Example: An instrument with a 12-in scale would have a higher readability than an instrument with a 6-in scale.

    Least count: The smallest difference between two indications that can be detected on the instrument scale. Measured values are good only up to this value. The least count error is the error associated with the resolution of the instrument.

    Resolution: The smallest increment in input which generates, on average, a measurable change in output.

    Example: A meter scale used for measurement of length may have graduations at a 1 mm division spacing or interval; its least count is then 1 mm.

    Instruments of higher precision can reduce the least count error. By repeating the observations and taking the arithmetic mean of the results, the mean value will be very close to the true value of the measured quantity, thus increasing its accuracy.
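
    A minimal numerical sketch of this averaging argument (in Python; the 0.1 mm least count and the noise level are assumed purely for illustration, not taken from the lecture):

      import numpy as np

      # Hypothetical repeated readings of a length whose true value is 25.400 mm,
      # taken on an instrument whose least count is 0.1 mm.
      rng = np.random.default_rng(1)
      true_value = 25.400
      noisy = true_value + rng.normal(0.0, 0.05, size=50)   # random measurement error
      readings = np.round(noisy, 1)                         # quantized by the least count

      print(f"single reading:      {readings[0]:.1f} mm")
      print(f"mean of 50 readings: {readings.mean():.3f} mm  (much closer to {true_value} mm)")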

  • Let's define some metrological terms! (cont) Both readability and least count depend on the scale length, the spacing of the graduations, and the size of the pointer.

    For digital instruments one is concerned with the significant digits on the display of the particular instrument.

    Sensitivity: The ratio of the linear movement of the pointer on an analog instrument to the change in the measured variable causing this motion. It is the local slope of the curve of output (e.g., 1 mV) versus input (e.g., 25 cm).

    Example: A pressure sensor will have a sensitivity of X V/mmHg.

    For a linear sensor, that slope is independent of the input value.

  • Let's define some metrological terms! (cont) So, typically, a sensor will have a sensitivity, but an instrument will have a resolution.

    Now, when your gauge (or measurement instrument) measures directly in "final units", then sensitivity and resolution are basically the same thing (and usually, the term resolution is used).

    In the other case, the term sensitivity is usually used (because the resolution depends on the display behind the sensor, and so is not relevant to the sensor itself). A mass scale may have increments of 1 g, but the indicator may change with less than 1 g of input.

    Example: The strain gauge sensor inside a scale will be described in terms of sensitivity (V/mg), but the scale itself will be described in terms of resolution (1 mg). A thermocouple will have a sensitivity (V/°C), but a thermometer will have a resolution (0.05 °C). A short numerical sketch follows this list.

    Repeatability: The capability of an instrument to give the same output among repeated inputs of the same value over a period of time.
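
    A minimal sketch (in Python, with assumed calibration numbers) of the distinction drawn above: the sensor's sensitivity is estimated as the slope of its output-versus-input calibration curve, while the instrument's resolution is simply the smallest increment its display reports.

      import numpy as np

      # Hypothetical thermocouple-style calibration data (assumed values):
      # known input temperatures (°C) and the corresponding sensor output (mV).
      temps_C   = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
      output_mV = np.array([0.00, 1.01, 2.02, 3.05, 4.10])

      # Sensitivity of a (nearly) linear sensor = slope of the output-vs-input curve.
      sensitivity, offset = np.polyfit(temps_C, output_mV, 1)   # mV per °C
      print(f"sensitivity ≈ {sensitivity:.4f} mV/°C, offset ≈ {offset:.4f} mV")

      # Resolution belongs to the display: the smallest increment it can report.
      resolution_C = 0.05                      # e.g. a thermometer reading to 0.05 °C
      reading = 23.137
      displayed = round(reading / resolution_C) * resolution_C
      print(f"true value {reading} °C is displayed as {displayed:.2f} °C")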

  • Let's define some metrological terms! (cont) Scale interval: The difference between two successive scale marks in units of the measured quantity. (In the case of numerical indication, it is the difference between two consecutive numbers.)

    The scale interval is an important parameter that determines the ability of the instrument to give an accurate indication of the value of the measured quantity. The scale spacing, or the length of a scale interval, should be convenient for the estimation of fractions.

    Discrimination: It is the ability of the measuring instrument to react to small changes of the measured quantity.

    Hysteresis: The difference between the indications of a measuring instrument when the same value of the measured quantity is reached by increasing or by decreasing that quantity. It also results in the pointer not returning completely to zero when the load is removed.

  • Let's define some metrological terms! (cont) Accuracy: Indicates the deviation of the reading from a known input. Accuracy is frequently expressed as a percentage of full-scale reading, so that a 100-kPa pressure gauge having an accuracy of 1 percent would be accurate within 1 kPa over the entire range of the gauge.

    Accuracy thus relates the deviation of an instrument reading to a known value; this deviation is called the error.

    Precision: Indicates the instrument's ability to reproduce a certain reading with a given accuracy. Accuracy can be improved up to, but not beyond, the precision of the instrument by calibration.

  • Let's define some metrological terms! (cont) Calibration: A comparison of two measurement devices or systems, one of known uncertainty (the standard) and one of unknown uncertainty (your test equipment or instrument). The calibration establishes the accuracy of the instrument. All test equipment that makes a quantitative measurement requires calibration from time to time. Simply put, calibration is the process of checking the dimensions and tolerances of a gauge, or the accuracy of a measurement instrument, by comparing it to a like instrument/gauge that has been certified as a standard of known accuracy.

    Certification: Performed prior to the use of an instrument/gauge, and again to re-verify it if it has been reworked, so that it once more meets its requirements. Certification is given by comparison to a reference standard whose calibration is traceable to an accepted national standard. Further, such reference standards must themselves have been certified and calibrated as masters not more than six months prior to use.

  • Let's define some metrological terms! (cont) Example: Calibrate load cells by placing known weights on them and reading the measurement (force) from the load cell; a short numerical sketch follows this list.

    Without calibration, or by using incorrect calibration procedures, we pay more at the gas pump, for food weighed incorrectly at the checkout counter, and for manufactured goods that do not meet their stated specifications. Therefore, calibration procedures involve a comparison of the particular instrument with either (1) a primary standard, (2) a secondary standard with a higher accuracy than the instrument to be calibrated, or (3) a known input source.

    Uncertainty: An estimate of the limits, at a given confidence level, within which the true value lies.

    Confidence level: A measure of the degree of reliability with which the results of a measurement can be expressed. Thus, if u is the uncertainty in a measured quantity x at the 98% confidence level, the probability that the true value lies between x + u and x - u is 98%; on measuring this quantity a large number of times, 98% of the values will lie between x + u and x - u.
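
    To make the calibration and confidence-level ideas concrete, here is a minimal Python sketch with invented numbers (the loads, the readings, and the rough two-sigma/95% rule are assumptions, not values from the lecture): a load cell is calibrated against known weights, and repeated readings of an unknown load are reduced to a mean value with an uncertainty interval.

      import numpy as np

      # Calibration: compare the instrument against known inputs (applied forces in N).
      applied_N = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
      output_mV = np.array([0.02, 2.11, 4.05, 6.20, 8.13, 10.09])
      slope, intercept = np.polyfit(applied_N, output_mV, 1)      # mV per N, zero offset
      print(f"calibration: output ≈ {slope:.4f} mV/N * F + {intercept:.3f} mV")

      # Repeated readings of one unknown load, converted to force via the calibration.
      readings_mV = np.array([5.07, 5.11, 5.02, 5.09, 5.06])
      forces_N = (readings_mV - intercept) / slope

      mean_F = forces_N.mean()
      # Uncertainty of the mean at roughly the 95% confidence level
      # (two standard deviations of the mean, assuming near-normal scatter).
      u = 2.0 * forces_N.std(ddof=1) / np.sqrt(len(forces_N))
      print(f"F = {mean_F:.2f} N ± {u:.2f} N (≈95% confidence)")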

  • Let's define some metrological terms! (cont) Stability: Often also referred to as drift, stability is expressed as the percentage change in the calibrated output of an instrument over a specified period, usually 90 days to 12 months, under normal operating conditions. Drift is usually given as a typical value.

    Tolerance: A design feature that defines limits within which a quality characteristic is supposed to lie on individual parts. It represents the maximum allowable deviation from a specified value. Tolerances are applied during the design and manufacturing stages of a product.

    Traceability: The property of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through a valid chain of calibrations all having stated uncertainties.

    Validation: Validation of measurement and test methods (procedures) is generally necessary to demonstrate that the methods are suitable for the intended use.

  • Let's define some metrological terms! (cont) Bias: The characteristic of a measure or a measuring instrument to give indications of the value of a measured quantity whose average differs from the true value of that quantity. Bias error is due to the algebraic summation of all the systematic errors affecting the indication of the instrument. Sources of bias are maladjustment of the instrument, permanent set, non-linearity errors, and errors of material measures.

    Inaccuracy: The total error of a measure or measuring instrument under specified conditions of use and including bias and repeatability errors. Inaccuracy is specified by two limiting values obtained by adding and subtracting to the bias error the limiting value of the repeatability error.

    If the known systematic errors are corrected, the remaining inaccuracy is due to the random errors and the residual systematic errors that also have a random character. This inaccuracy is called the uncertainty of measurement.

    Error: The difference between the mean of a set of readings on the same component and the true value. The smaller the error, the more accurate the instrument.

  • Let's define some metrological terms! (cont) A- To achieve a higher accuracy an instrument needs to possess:

    1- Great sensitivity. 2- Great consistency: readings obtained for a given quantity should be the same every time. 3- The possibility of calibrating it: in such an instrument the errors will be constant at any given reading, allowing for calibration.

    B- Thus, the greater the accuracy aimed at, the greater the number of sources of error to be investigated and controlled.

    C- As far as possible, the errors should be capable of elimination by adjustment contained within the instrument itself.

    D- Every important source of inaccuracy should be known.

    E- When an error cannot be eliminated, it should be made as small as possible.

  • Let's define some metrological terms! (cont) * As far as possible the principle of similarity must be followed, i.e. the quantity to be measured must be similar to that used for calibrating the instrument. Further, the measuring operations performed on the standard and on the unknown must be as nearly identical as feasible, and carried out under the same physical conditions (ambient temperature, etc.), using the same procedures in all respects in both calibration and measurement.

    * Accuracy in measurement is essential at all stages of product development from research to development and design, production, testing and evaluation, quality assurance, standardisation, on-line control, operational performance appraisal, reliability estimation, etc.

    * The last word in connection with accuracy is that the accuracy at which we aim, that is to say, the trouble we take to avoid errors in manufacture and in measuring those errors during inspection, must depend upon the job itself and on what is required. We should make very sure that we actually need that much accuracy, and that the cost of achieving it is justified by the purpose for which it is desired.

  • Let's define some metrological terms! (cont) * The accuracy of a measuring system depends on elements such as:

    1- Calibration standards: may be affected by ambient influences (e.g., thermal expansion), stability with time, elastic properties, geometric compatibility, and position of use.

    2- Workpiece being measured: may be affected by ambient influences, cleanliness, surface condition, elastic properties, geometric truth, the arrangement supporting it, provision of a defining datum, etc.

    3- Measuring instruments: may be affected by hysteresis, backlash, friction, zero-drift error, deformation in handling or use of heavy workpieces, inadequate amplification, and errors in the amplification device.

    4- Person or inspector carrying out the measurement: errors here can be many, arising mainly from improper training in use and handling, lack of skill, sense of precision and accuracy appreciation, improper selection of instrument, attitude towards and realization of personal accuracy achievements, etc.

    5- Environmental influences: temperature (thermal expansion effects due to heat radiation from lights, heating of components by sunlight and people, and temperature equalization of work, instrument, and standard); surroundings; vibrations; lighting; pressure gradients (which affect optical measuring systems); etc.

  • Let's define some metrological terms! (cont) Standards: In order that investigators in different parts of the country and different parts of the world may compare the results of their experiments on a consistent basis, it is necessary to establish certain standard units of length, weight, time, temperature, and electrical quantities.

    Unit: Any measurement of which there is exactly 1, i.e. a standard amount of a quantity used as a reference. The SI system is the preferred system of units, although the English system is more popular in the US.

    Dimension: A physical variable used to specify the behavior or nature of a particular system. Almost every unit can be traced to the following dimensions: L = length, M = mass, F = force, τ = time, and T = temperature.

    1 meter = 39.37 inches; 1 pound-mass = 453.59237 grams; 1 inch = 2.54 centimeters; K = °C + 273.15; 1 British thermal unit (Btu) = 778.16 lbf·ft = 1055 J; 1 newton = 1 kilogram-meter per second squared.
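
    A small Python sketch (illustrative only) applying the conversion factors quoted above:

      # Unit conversions using the factors listed in the lecture.
      INCHES_PER_METER = 39.37
      GRAMS_PER_LBM    = 453.59237
      CM_PER_INCH      = 2.54
      JOULES_PER_BTU   = 1055.0

      def meters_to_inches(m): return m * INCHES_PER_METER
      def lbm_to_grams(lbm):   return lbm * GRAMS_PER_LBM
      def celsius_to_kelvin(t_c): return t_c + 273.15
      def btu_to_joules(q):    return q * JOULES_PER_BTU

      print(meters_to_inches(1.0))     # 39.37 in
      print(lbm_to_grams(1.0))         # 453.59237 g
      print(celsius_to_kelvin(25.0))   # 298.15 K
      print(btu_to_joules(1.0))        # 1055.0 J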

  • The generalized measurement system * Most measurement systems may be divided into three parts:

    1- A detector-transducer stage, which detects the physical variable and performs either a mechanical or an electrical transformation to convert the signal into a more usable form. In the general sense, a transducer is a device that transforms one physical effect into another. In most cases, however, the physical variable is transformed into an electric signal because this is the form of signal that is most easily measured.

    2- Some intermediate stage, which modifies the direct signal by amplification, filtering, or other means so that a desirable output is available. If a signal is too weak, it may be amplified; if it is measured in a noisy environment with interference from other sources, it may be filtered.

    3- A final or terminating stage, which acts to indicate, record, or control the variable being measured. The output may be digital or analog. This stage may be referred to as data acquisition: the conversion of the signal from analog to digital using an A/D board so that a computer may manipulate the data.
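
    As an illustration of these three stages, here is a minimal Python sketch (the sensor response, gain, filter length, and ADC parameters are all assumed for illustration, not taken from the lecture):

      import numpy as np

      def transducer(temperature_C):
          """Stage 1 (detector-transducer): convert the physical variable into a
          weak voltage, here a thermocouple-like 40 µV/°C response plus noise."""
          rng = np.random.default_rng(0)
          return 40e-6 * temperature_C + rng.normal(0.0, 5e-6, temperature_C.shape)

      def signal_conditioning(v, gain=1000.0, window=5):
          """Stage 2 (intermediate): amplify the weak signal and low-pass filter it
          with a simple moving average."""
          amplified = gain * v
          kernel = np.ones(window) / window
          return np.convolve(amplified, kernel, mode="same")

      def adc(v, full_scale=5.0, bits=12):
          """Stage 3 (terminating): quantize to a digital value, as an A/D board
          would, so a computer can record and manipulate the data."""
          step = full_scale / (2 ** bits)
          return np.round(v / step) * step

      true_temperature = np.linspace(20.0, 80.0, 50)   # the measured variable
      recorded = adc(signal_conditioning(transducer(true_temperature)))
      print(recorded[:5])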


  • Dynamic measurement * A static measurement of a physical quantity is performed when the quantity is not changing with time. The deflection of a beam under a constant load would be a static deflection. However, if the beam were set in vibration, the deflection would vary with time, and the measurement process might be more difficult.

    * The measurement of a physical quantity using a sensor under dynamic conditions, i.e. when the quantity is changing with time, can be described with the zeroth-, first-, and second-order systems introduced below.

    * A system may be described in terms of a general variable x(t) written in differential equation form, as shown below.

    * Where F(t) is some forcing function imposed on the system. The order of the system is designated by the order of the differential equation.
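
    The general equation referred to above was not captured in the transcript; in the usual textbook form it reads

      a_n d^n x/dt^n + a_(n-1) d^(n-1) x/dt^(n-1) + ... + a_1 dx/dt + a_0 x = F(t)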

  • Dynamic measurement (cont) * A zeroth-order system would be governed by:

    * A first-order system by:

    * And a second-order system by:
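
    The three governing equations referenced above were likewise not captured; reconstructed from the general form, they are

      zeroth order:  a_0 x = F(t)
      first order:   a_1 dx/dt + a_0 x = F(t)
      second order:  a_2 d^2 x/dt^2 + a_1 dx/dt + a_0 x = F(t)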

  • Dynamic measurement (cont) * A first-order system describes such systems as thermal systems, capacitors discharging through a resistor, the braking of an auto, the flow of fluid from a tank, and the cooling of a cup. They have one state variable.

    * A second-order system describes systems such as a spring-mass-damper or electric systems with inductance. These systems have two independent state variables, which may represent independent energy storage elements, such as the exchange of energy between a mass and a stiffness, or between capacitors and inductors, etc.

    * Engineers often use second-order system models in the preliminary stages of design in order to establish the parameters of the energy storage and dissipation elements required to achieve a satisfactory response.

    * We shall examine the behavior of these three types of systems to study some basic concepts of dynamic response. We shall also give some concrete examples of physical systems which exhibit the different orders of behavior.

  • Advanced system dynamics and control * Review of First- and Second-Order System Response: First-Order Linear System Transient Response:

    * The dynamics of many systems of interest to engineers may be represented by a simple model containing one independent energy storage element. For example, the braking of an automobile, the discharge of an electronic camera flash, the flow of fluid from a tank, and the cooling of a cup of coffee may all be approximated by a first-order differential equation, which may be written in a standard form as:
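
    The standard form referred to here, Eq. (1) in what follows, was not captured in the transcript; with τ denoting the system time constant it is (a reconstruction)

      τ dy/dt + y = f(t)                                   (1)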

  • Advanced system dynamics and control (cont) * Where the system is defined by the single parameter τ, the system time constant, and f(t) is a forcing function. For example, if the system is described by a linear first-order state equation and an associated output equation:

    * And the selected output variable is the state variable, that is y(t) = x(t), Eq. (2) may be rearranged:
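
    The state, output, and rearranged equations were not captured; reconstructed in the usual state-space notation (numbered to match the references in the text) they read

      dx/dt = a x + b u(t)                                 (2)
      y = c x + d u(t)                                     (3)

    and, with y(t) = x(t),

      dx/dt - a x = b u(t)                                 (4)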

  • Advanced system dynamics and control (cont) * And rewritten in the standard form (in terms of a time constant τ = -1/a), by dividing through by -a:

    * Where the forcing function is f(t) = (-b/a)u(t). If the chosen output variable y(t) is not the state variable, Eqs. (2) and (3) may be combined to form an input/output differential equation in the variable y(t):
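
    Reconstructed, the two equations referred to in this block are

      τ dx/dt + x = (-b/a) u(t)                            (5)

    and, combining Eqs. (2) and (3) for a general output y(t),

      dy/dt - a y = (bc - ad) u(t) + d du/dt               (6)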

  • Advanced system dynamics and control (cont) * To obtain the standard form we again divide through by -a:

    * Comparison with Eq. (1) shows the time constant is again τ = -1/a, but in this case the forcing function is a combination of the input and its derivative:

    * In both Eqs. (5) and (7) the left-hand side is a function of the time constant τ = -1/a only, and is independent of the particular output variable chosen.
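
    Reconstructed, the standard form and its forcing function (referred to by the two preceding colons) are

      τ dy/dt + y = ((ad - bc)/a) u(t) - (d/a) du/dt       (7)

      f(t) = ((ad - bc)/a) u(t) - (d/a) du/dt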

  • Advanced system dynamics and control (cont) * Time constants of some typical first-order systems.
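
    The table itself was not captured in the transcript; typical entries (standard results, supplied here as a reconstruction) include: an RC circuit, τ = RC; an RL circuit, τ = L/R; a mass-damper system, τ = m/b; a fluid tank draining through a resistance, τ = Rf Cf; and a body cooling by convection, τ = m c_p / (h A).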


  • Advanced system dynamics and control (cont) The standard form of the homogeneous first-order equation, found by setting f(t) = 0 in Eq. (1), is the same for all system variables:

    And generates the characteristic equation:
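
    Reconstructed, the homogeneous equation and the characteristic equation it generates are

      τ dy/dt + y = 0,        τ s + 1 = 0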

  • Advanced system dynamics and control (cont) Which has a single root,

    The system response to an initial condition y(0) is:

    Using both the normalized time t/τ and a normalized response magnitude y(t)/y(0):
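
    Reconstructed, the root, the resulting response, and its normalized form are

      s = -1/τ,        y(t) = y(0) e^(-t/τ),        y(t)/y(0) = e^(-t/τ)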


  • Example: Time constant for a fluid system. A water tank with vertical sides and a cross-sectional area of 2 m^2 is fed from a constant-displacement pump, which may be modeled as a flow source Q(t). A valve, represented by a linear fluid resistance Rf = 10^6 N·s/m^5, at the base of the tank is always open and allows water to flow out. In normal operation the tank is filled to a depth of 1.0 m. At time t = 0 the power to the pump is removed and the flow into the tank is disrupted. Given that the flow through the valve is 10^-6 m^3/s when the pressure across it is 1 N/m^2, determine the pressure at the bottom of the tank as it empties, and estimate how long it takes for the tank to empty. The linear graph generates a state equation in terms of the pressure across the fluid capacitance Pc(t):
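
    The state equation itself was not captured; for a tank of fluid capacitance Cf discharging through the valve resistance Rf it takes the form (a reconstruction)

      dPc/dt = -(1/(Rf Cf)) Pc + (1/Cf) Q(t)

    where Q(t) is the pump flow into the tank.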

  • Example: Time constant for a fluid system (cont) Solution: The tank is represented as a fluid capacitance with a value:

    So Cf = 2/(1000 × 9.81) = 2.04 × 10^-4 m^5/N, and the standard first-order form is:
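
    Neither relation was written out in the transcript; reconstructed, the fluid capacitance and the standard first-order form are

      Cf = A/(ρ g),        Rf Cf dPc/dt + Pc = Rf Q(t)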

  • Example: Time constant for a fluid system (cont) The time constant is τ = Rf Cf = 204 seconds. When the pump fails the input flow is set to zero, and the system is described by the homogeneous equation:

    The homogeneous pressure response is
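
    The equations themselves were not captured; reconstructed, the homogeneous equation and the resulting pressure response are

      Rf Cf dPc/dt + Pc = 0,        Pc(t) = Pc(0) e^(-t/τ)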

  • Example: Time constant for a fluid system (cont) At the initial depth h(0) = 1 m, the initial pressure is Pc(0) = ρ g h(0) = 1000 × 9.81 × 1 = 9810 N/m^2.

    With these values the pressure at the base of the tank as it empties is Pc(t) = 9810 e^(-t/204) N/m^2.

    The time for the tank to drain cannot be simply stated because the pressure asymptotically approaches zero. It is necessary to define a criterion for the complete decay of the response; commonly a period of t = 4τ is used, since y(t)/y(0) = e^(-4) < 0.02. In this case, after a period of 4τ = 816 seconds the tank contains less than 2% of its original volume and may be approximated as empty.
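
    As a check on the arithmetic, here is a short Python sketch using only the quantities stated in the example (the variable names are mine):

      import numpy as np

      A   = 2.0          # tank cross-sectional area, m^2
      rho, g = 1000.0, 9.81
      R_f = 1.0e6        # valve fluid resistance, N·s/m^5
      h0  = 1.0          # initial depth, m

      C_f = A / (rho * g)        # fluid capacitance, m^5/N   (≈ 2.04e-4)
      tau = R_f * C_f            # time constant, s           (≈ 204)
      P0  = rho * g * h0         # initial pressure, N/m^2    (= 9810)

      t = np.array([0.0, tau, 4.0 * tau])
      P = P0 * np.exp(-t / tau)  # homogeneous pressure response

      print(f"C_f = {C_f:.3e} m^5/N, tau = {tau:.0f} s")
      print(f"P(0) = {P[0]:.0f} N/m^2, P(4 tau) = {P[2]:.1f} N/m^2 ({P[2]/P0:.1%} of initial)")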