Seismic Resolution

Seismic resolution and fidelity are two important measures of the quality of a seismic record and of seismic images. Seismic resolution quantifies the level of detail, such as the finest size of subsurface objects detectable in the data, whereas seismic fidelity quantifies truthfulness, such as the genuineness of the data or the degree to which the imaged position of a target matches its true subsurface position.

Let us try to understand this by generating synthetic data and analyzing it.

Seismic Resolution Analysis: 1

Estimating the Width of the Fresnel Zone

Fresnel resolution quantifies the resolvability of a seismic wave perpendicular to the direction of wave propagation. It is defined as the width of the first Fresnel zone, which arises from the interference of the spherical waves emanating from the source and from the receiver.
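As a quick illustration, the width of the first Fresnel zone at depth z for a dominant wavelength lambda = v/f is commonly approximated by W = 2 * sqrt(lambda*z/2 + lambda^2/16). Here is a minimal sketch; the velocity, frequency, and depth values below are made-up assumptions, not from any real survey:

```python
import numpy as np

def fresnel_zone_width(v, f, z):
    """Width (diameter) of the first Fresnel zone.

    v : wave velocity (m/s)
    f : dominant frequency (Hz)
    z : reflector depth (m)
    """
    lam = v / f                              # dominant wavelength
    r = np.sqrt(lam * z / 2 + lam**2 / 16)   # first Fresnel zone radius
    return 2 * r

# Hypothetical example values (assumptions, not survey data)
print(fresnel_zone_width(v=2500.0, f=30.0, z=1000.0))  # ~410 m
```

Note how the zone widens with depth and with lower frequency, which is why deeper targets are laterally harder to resolve.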

Seismic Resolution Analysis: 2

Fidelity

To investigate the fidelity of our data, let us consider the technique of resampling. For our case, we use the method of "bootstrapping", which relies on random sampling with replacement. The other popular resampling method is "jackknifing", which predates bootstrapping. The jackknife estimate of a parameter is found by systematically leaving out each observation from the dataset in turn, computing the estimate on the remaining data, and then averaging these calculations.
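As a tiny illustration of the jackknife idea (a sketch with made-up numbers, not our seismic data), here is the leave-one-out estimate of the mean:

```python
import numpy as np

def jackknife_mean(x):
    """Jackknife estimate of the mean: average of leave-one-out means."""
    n = len(x)
    loo_means = np.array([np.delete(x, i).mean() for i in range(n)])
    return loo_means.mean()

x = np.array([2.1, 2.5, 1.9, 2.4, 2.2])  # made-up sample
print(jackknife_mean(x))  # for the mean, this equals the plain sample mean
```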

The principle behind bootstrapping is that a dataset is taken and divided into two by randomly sampling with replacement. The newly sampled data are then used to invert for a model using some kernel function. If the two resulting models correlate highly enough, we can say that the prominent features in the model come from consistent signals in the data; a sketch of this idea follows.
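Here is a minimal sketch of that workflow on a synthetic linear problem. The kernel G, the "true" model, and the noise level are all illustrative assumptions standing in for a real seismic inversion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations d = G m + noise, with a made-up kernel G
n_data, n_model = 200, 5
G = rng.normal(size=(n_data, n_model))          # stand-in kernel function
m_true = np.array([1.0, -0.5, 2.0, 0.3, -1.2])  # assumed true model
d = G @ m_true + 0.1 * rng.normal(size=n_data)

def bootstrap_model(G, d, rng):
    """Invert one bootstrap resample (sampling data rows with replacement)."""
    idx = rng.integers(0, len(d), size=len(d))
    m, *_ = np.linalg.lstsq(G[idx], d[idx], rcond=None)
    return m

m1 = bootstrap_model(G, d, rng)
m2 = bootstrap_model(G, d, rng)

# High correlation between the two inverted models suggests the model
# features are driven by consistent signal in the data
print(np.corrcoef(m1, m2)[0, 1])
```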

We don't have a dataset from which to build a velocity model, so instead we can take random Gaussian data and experiment with it.

Let's pose the null hypothesis that the two sets of data come from the same probability distribution (not necessarily Gaussian). Under the null hypothesis, the two sets are interchangeable, so if we aggregate the data points and randomly divide them into two sets, the results should be comparable to those obtained with the original split. The strategy, then, is to generate many random datasets with replacement (bootstrapping), compute the difference in means (or difference in medians, or any other robust statistic) for each, and compare the resulting values to the statistic computed from the original data.
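A minimal sketch of this test on synthetic Gaussian data follows; the sample sizes, means, and number of resamples are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two synthetic datasets drawn from Gaussian distributions (assumed values)
a = rng.normal(loc=0.0, scale=1.0, size=500)
b = rng.normal(loc=0.1, scale=1.0, size=500)

observed = a.mean() - b.mean()       # statistic from the original split

pooled = np.concatenate([a, b])
n_iter = 10_000
diffs = np.empty(n_iter)
for i in range(n_iter):
    # resample with replacement, then split into two sets of the original sizes
    resampled = rng.choice(pooled, size=pooled.size, replace=True)
    diffs[i] = resampled[:a.size].mean() - resampled[a.size:].mean()

# Two-sided p-value: fraction of resampled differences at least as extreme
p = np.mean(np.abs(diffs) >= np.abs(observed))
print(f"observed diff = {observed:.4f}, p ~ {p:.3f}")
```

A small p-value would lead us to reject the null hypothesis that the two sets are interchangeable.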

Seismic Resolution Analysis: 3

[Animations: wave_propagation, wave_propagation_modelling]
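For readers who want a toy version of such modelling, here is a minimal 1-D acoustic finite-difference sketch. All parameters are illustrative assumptions and are not tied to the animations above:

```python
import numpy as np

nx, dx = 500, 5.0          # grid points and spacing (m)
nt, dt = 1000, 0.001       # time steps and step size (s)
v = np.full(nx, 2000.0)    # constant velocity model (m/s)

# Ricker wavelet source injected at the centre of the grid
f0, t0 = 15.0, 0.08
t = np.arange(nt) * dt
arg = (np.pi * f0 * (t - t0)) ** 2
src = (1.0 - 2.0 * arg) * np.exp(-arg)
isrc = nx // 2

c2 = (v * dt / dx) ** 2    # squared Courant number (stability needs <= 1)
p_old = np.zeros(nx)
p = np.zeros(nx)
for it in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]   # second spatial derivative
    p_new = 2 * p - p_old + c2 * lap           # explicit time update
    p_new[isrc] += src[it] * dt**2             # add source term
    p_old, p = p, p_new

print("max amplitude at final step:", np.abs(p).max())
```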

—Utpal Kumar (IES, Academia Sinica)
