Transcript of umesh final dissertation
Date posted: 11-Apr-2015

2 SEISMIC DATA PROCESSING

Alteration of seismic data to suppress noise, enhance signal and migrate seismic events to their appropriate locations in space is termed seismic processing. It facilitates better interpretation, because subsurface structures and reflection geometries become more apparent.

2.1 OBJECTIVES

- Obtain a representative image of the subsurface.
- Improve the signal-to-noise ratio, e.g. by recording several channels and stacking the data (white noise is suppressed).
- Present the reflections on the record sections with the greatest possible resolution and clarity, and in the proper geometrical relationship to each other, by adapting the waveform of the signals.
- Isolate the wanted signals (separate reflections from multiples and surface waves).
- Obtain information about the subsurface (velocities, reflectivity, etc.).
- Obtain a realistic image by geometrical correction: conversion from travel time into depth, and correction for dips and diffractions.
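The noise suppression obtained by stacking several channels can be illustrated with synthetic data; the signal shape, channel count and noise level below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_samples = 48, 500
t = np.linspace(0.0, 1.0, n_samples)
signal = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)  # a toy reflection wavelet

# Each channel records the same signal plus independent white noise.
noise = rng.normal(scale=1.0, size=(n_channels, n_samples))
records = signal + noise

stacked = records.mean(axis=0)  # stacking: average over channels

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# Noise power drops by ~1/N when N independent traces are averaged,
# so the noise RMS drops by ~1/sqrt(N) and the SNR rises accordingly.
snr_single = rms(signal) / rms(records[0] - signal)
snr_stacked = rms(signal) / rms(stacked - signal)
print(snr_single, snr_stacked)  # stacked SNR is roughly sqrt(48) times higher
```

This is the reason white noise is suppressed by stacking while the coherent reflection survives.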


2.2 PREPROCESSING

Preprocessing is the first and an important step in the processing sequence. It commences with the reception of the field tapes and the observer's log: the field tapes contain the seismic data, while the observer's log contains the geographical data (shot/receiver number, picket number, latitude and longitude, etc.).

2.3 DEMULTIPLEXING

Field tapes customarily arrive at the processing centre written in multiplexed (time-sequential) format, because that is generally the way sampling is done in the field. The early stages of processing, however, require channel-ordered (trace-ordered) data. Demultiplexing is therefore done to convert the time-sequential data into trace-sequential data. Mathematically, demultiplexing can be seen as transposing a large matrix, so that the columns of the resulting matrix can be read as seismic traces recorded at different offsets with a common shot point. At this stage the data are converted into a convenient format that is used throughout the processing; this format is determined by the type of processing system and the individual company. A common format used in the seismic industry for data exchange is SEG-Y, established by the Society of Exploration Geophysicists. Nowadays demultiplexing is usually done in the field.

2.4 REFORMATTING

The formats generally used for data recording are SEG-D (demultiplexed data) and SEG-B (multiplexed data); hence they are called field formats. Demultiplexing is performed on the multiplexed (SEG-B) data. In this stage the data are converted to a convenient format, which is used throughout processing. Many standards are available for data storage; the format differs with the manufacturer, the type of recording instrument, and the version of the operating system.

2.5 FIELD GEOMETRY SET-UP

Field geometry is created with the help of the information provided by the field party, which is as follows:
1. Survey information: (I) X and Y coordinates of shot/vibrator points; (II) elevation of geophone/shot points.
2. Recording instrument: (I) record file numbers; (II) shot interval, group interval, near offset and far offset; (III) layout, number of channels, fold.
3. Processing information: (I) datum statics; (II) near-surface model; (III) datum plane elevation.

2.6 EDITING

Editing consists of removing extremely noisy traces and muting the first arrivals on all traces. Traces from poorly planted geophones may show sluggishness, introduce low frequencies and sometimes cause spiky amplitudes, and therefore degrade a CMP stack. These traces are identified during the manual inspection/editing phase of all the shot records and flagged in the header so that they are not included (they are killed) in the processing steps and in the display.
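Demultiplexing as a matrix transpose, as described in Section 2.3, can be sketched in a few lines; the matrix sizes are illustrative:

```python
import numpy as np

n_channels, n_samples = 6, 4

# Multiplexed (time-sequential) recording: one row per time sample,
# holding the value of every channel at that instant.
multiplexed = np.arange(n_samples * n_channels).reshape(n_samples, n_channels)

# Demultiplexing is the transpose: each row of the result is now one
# trace, i.e. all time samples for a single channel/offset.
traces = multiplexed.T

print(traces.shape)  # (6, 4): n_channels traces of n_samples each
```

Real demultiplexing of course also handles tape-format headers and blocking, but the core data rearrangement is exactly this transpose.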


Traces so noisy that they do not visually correlate with strong arrivals on adjacent traces should be killed. One has to be conservative in trace killing, because when the fold of the data is low, eliminating even a few traces may have a noticeable effect on the stacked traces. Editing also involves leaving out the auxiliary channels and NTBC traces, and detecting and replacing dead or exceptionally noisy traces. Bad data may be replaced with interpolated values. Noisy traces, and those with static glitches or mono-frequency high-amplitude signal levels, are deleted, and polarity reversals are corrected. The output after editing usually includes a plot of each file, so that one can see which data need further editing and what type of noise attenuation is required.
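Automatic flagging of bad traces of the kind described above can be sketched with a simple RMS-threshold rule; the threshold factor of 5 is an illustrative choice, not a standard value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy shot gather: 24 traces of 200 samples; trace 5 is wildly noisy
# and trace 17 is dead (e.g. a disconnected or poorly planted geophone).
gather = rng.normal(scale=1.0, size=(24, 200))
gather[5] *= 50.0
gather[17] = 0.0

rms = np.sqrt(np.mean(gather ** 2, axis=1))
median_rms = np.median(rms)

# Flag ("kill") traces whose RMS is far above or far below the median.
killed = (rms > 5 * median_rms) | (rms < median_rms / 5)
print(np.flatnonzero(killed))  # traces 5 and 17 are flagged
```

In practice such automatic flags would only assist the manual inspection described above, not replace it.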

Fig. 2.1.1 (a) Before editing; (b) after editing.

2.7 SPHERICAL DIVERGENCE CORRECTION

A single shot can be thought of as a point source which gives rise to a spherical wave field. Many factors affect the amplitude of this wave field as it propagates through the earth.
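A minimal sketch of a spherical-divergence gain, assuming a constant velocity so that the wavefront radius is simply r = v*t (real corrections use a velocity function, and the values of dt and v below are arbitrary):

```python
import numpy as np

dt = 0.004           # sample interval, s (illustrative)
v = 2000.0           # assumed constant velocity, m/s (illustrative)
n_samples = 1000
t = np.arange(1, n_samples + 1) * dt  # start at dt to avoid r = 0

def spherical_divergence_gain(trace, t, v):
    # Amplitude decays as 1/r with r = v*t, so multiplying each sample
    # by v*t compensates the geometric-spreading loss.
    return trace * (v * t)

# A wavefield whose amplitude has decayed exactly as 1/r:
r = v * t
trace = 1.0 / r

restored = spherical_divergence_gain(trace, t, v)
print(restored[:3])  # all samples restored to 1.0
```

AGC and PGC are alternative, purely data-driven gain schemes; the sketch above is the physics-based geometric-spreading correction only.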

Two important factors which have a major effect on a propagating wave field are spherical divergence and absorption. Spherical divergence causes the wave amplitude to decay as 1/r, where r is the radius of the wavefront. Absorption changes the frequency content of the initial source signal in a time-variant manner as it propagates; since the earth behaves as a low-pass filter, high frequencies are rapidly absorbed. Several gain programmes are used to compensate for these losses: AGC, PGC and geometric spreading correction.

2.8 STATIC CORRECTION

When seismic observations are made on non-flat topography, the observed arrival times do not directly depict the subsurface structures. The reflection times must be corrected for elevation and for changes in the thickness of the weathering layer with respect to a flat datum. The former correction removes differences in travel time due to variation in the surface elevation of the shot and receiver locations. The weathering correction removes differences in travel time through the near-surface zone of unconsolidated low-velocity material, which may vary in thickness from place to place. These are called static corrections, as they do not change with time. The static corrections are computed taking into account the elevations of the source and receiver locations with respect to a seismic reference datum (such as mean sea level) and the velocity information in the weathering and sub-weathering layers. Often, special surveys (uphole surveys, shallow refraction studies) precede the conventional acquisition to obtain the characteristics of the low-velocity layer.

2.9 TRACE BALANCING

To bring all the input data amplitudes into a specific range (necessary for display), amplitude scaling is done. A separate balance factor is computed for and applied to each trace individually. Nowadays, surface-consistent amplitude balancing is in use.

2.10 MAIN PROCESSING

Main processing includes three major steps:
1. DECONVOLUTION
2. STACKING
3. MIGRATION
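The per-trace balancing described in Section 2.9 can be sketched as scaling every trace to a common RMS level; the gather sizes and target level below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy gather whose traces have very different overall amplitudes.
gather = rng.normal(size=(8, 300)) * rng.uniform(0.1, 50.0, size=(8, 1))

target_rms = 1.0
rms = np.sqrt(np.mean(gather ** 2, axis=1, keepdims=True))

# One balance factor per trace, applied to that trace only.
balanced = gather * (target_rms / rms)

print(np.sqrt(np.mean(balanced ** 2, axis=1)))  # every trace now has RMS 1.0
```

Surface-consistent balancing differs in that the factor is decomposed into shot- and receiver-dependent terms rather than being computed per trace in isolation.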

Fig. 2.1.2 Seismic data volume represented in processing coordinates: midpoint-offset-time (after Öz Yilmaz, 2001).

Deconvolution acts on the data along the time axis and increases temporal resolution. Stacking compresses the data volume in the offset direction and yields the plane of the stacked section (the frontal face of the prism). Migration then moves dipping events to their true subsurface positions and collapses diffractions, thus increasing lateral resolution.
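The action of stacking on this data volume can be sketched with a toy midpoint-offset-time cube (all sizes and amplitudes below are illustrative, and NMO correction is assumed to have been applied already):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy NMO-corrected data volume in processing coordinates:
# (midpoint, offset, time). After NMO correction the same reflection
# sits at the same time sample on every offset.
n_mid, n_off, n_t = 50, 24, 400
volume = rng.normal(scale=0.5, size=(n_mid, n_off, n_t))
volume[:, :, 200] += 1.0  # a flat reflector at time sample 200

# Stacking compresses the offset axis, leaving the stacked section:
# the "frontal face of the prism", a (midpoint, time) plane.
stack = volume.mean(axis=1)

print(volume.shape, stack.shape)  # (50, 24, 400) -> (50, 400)
```

Averaging over the 24 offsets also suppresses the random noise, so the reflector at sample 200 stands out much more clearly in the stacked section than in any single-offset plane.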


2.11 DECONVOLUTION

Deconvolution is a process that improves the temporal resolution of seismic data by compressing the basic seismic wavelet.

The need for deconvolution
In exploration seismology, the seismic wavelet generated by the source travels through different geological strata to reach the receiver. Because of the many distorting effects encountered, the wavelet reaching the receiver is by no means similar to the wavelet propagated by the source.

Objectives of deconvolution
- Shorten reflection wavelets.
- Attenuate ghosts, instrument effects, reverberations and multiple reflections.

Assumptions of the convolutional model for deconvolution:
(I) The earth is made up of horizontal layers of constant velocity.
(II) The source generates a compressional plane wave that impinges on layer boundaries at normal incidence.
(III) The source waveform does not change as it travels in the subsurface.
(IV) The noise component n(t) is zero.
(V) The source waveform is known.
(VI) Reflectivity is a random series.
(VII) The seismic wavelet is minimum phase.
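The convolutional model underlying these assumptions can be sketched numerically; the wavelet and reflectivity below are toy values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Convolutional model of the seismic trace under the assumptions above:
#   x(t) = w(t) * r(t) + n(t),  here with n(t) = 0 (assumption IV).
wavelet = np.array([1.0, -0.5, 0.25, -0.125])  # front-loaded, minimum-phase (assumption VII)
reflectivity = rng.choice([-1.0, 0.0, 0.0, 1.0], size=100)  # random series (assumption VI)

trace = np.convolve(reflectivity, wavelet)

print(trace.shape)  # (103,) = 100 + 4 - 1
```

Deconvolution seeks an inverse filter which, convolved with the trace, recovers the reflectivity series; the minimum-phase assumption is what guarantees a stable causal inverse exists.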


There are two types of deconvolution.

1) Deterministic deconvolution
Deconvolution in which the particulars of the filter whose effects are to be removed are known is called deterministic deconvolution. The source wave shape is sometimes recorded and used in a deterministic source-signature correction; no random assumptions are involved. For example, where the source wavelet is accurately known, we can perform source-signature deconvolution.

2) Statistical deconvolution
Statistical deconvolution derives the information about the wavelet from the data itself, where no information is available about any component of the model. Statistical deconvolution is applied without prior application of deterministic deconvolution, for example in the case of land data acquired with an explosive source. In addition, certain assumptions are made about the data which justify the statistical approach.

There are two types of statistical deconvolution:
(I) Spiking deconvolution: the process by which the seismic wavelet is compressed into a zero-lag spike.
(II) Predictive deconvolution: the process uses a prediction distance greater than unity and yields a wavelet of finite duration instead of a spike. This is helpful in suppressing multiples.

Deconvolution parameters
Deconvolution can give best results only when accurate parameters are chosen. The parameters associated with predictive deconvolution are:

(I) Operator length: the total operator length is the sum of the prediction operator length (POL) and the prediction distance (PD). The deconvolution is ineffective if the POL is too short. Typically the prediction operator should exceed two or three times the d