Research Article

On a Deep Learning Method of Estimating Reservoir Porosity

Zhenhua Zhang,1 Yanbin Wang,1 and Pan Wang2

1 College of Geoscience and Surveying Engineering, China University of Mining and Technology (Beijing), Beijing 100083, China
2 State Key Laboratory of Nuclear Resources and Environment, East China University of Technology, Nanchang 330013, Jiangxi, China

Correspondence should be addressed to Pan Wang; [email protected]

Received 3 December 2020; Revised 13 January 2021; Accepted 27 January 2021; Published 8 February 2021

Academic Editor: Rossana Dimitri

Copyright © 2021 Zhenhua Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract: Porosity is an important parameter of oil and gas storage that reflects the geological characteristics of different historical periods. The logging parameters obtained from deep to shallow strata record the sedimentary characteristics of different geological periods, so there is a strong nonlinear mapping relationship between porosity and logging parameters. Making full use of logging parameters to predict the shale content and porosity of the reservoir is very important for precise reservoir description. Deep neural network technology has a strong ability to mine data structure and has been applied to shale content prediction in recent years. The gated recurrent unit (GRU) neural network has a further advantage in processing serialized data. Therefore, this study proposes a method to predict porosity from multiple logging parameters based on the GRU neural network. First, a correlation measurement method based on the Copula function is used to select the logging parameters most relevant to porosity.
Then, the GRU neural network is used to identify the nonlinear mapping relationship between logging data and porosity parameters. The application results in an exploration area of the Ordos Basin show that this method is superior to multiple regression analysis and the recurrent neural network method, which indicates that the GRU neural network is more effective in predicting series-type reservoir parameters such as porosity.

1. Introduction

Porosity is an important physical property parameter reflecting reservoir capacity. Accurate calculation of reservoir porosity is key work in geological interpretation and in oil exploration and development. Each logging parameter carries porosity information to a varying degree, and the relationship between porosity and logging parameters is a typical multiparameter nonlinear mapping relationship. Making full use of the various effective logging parameters to comprehensively predict porosity is of great significance to oil and gas exploration and development.

Reservoir porosity is affected by many geological factors, such as burial depth, structural location, sedimentary environment, lithology change, and diagenetic degree. From the perspective of rock geophysics, there is a typical nonlinear relationship between reservoir porosity and logging parameters [1, 2]. Bakhorji et al. believe that the porosity parameter obtained by petrophysical analysis of core samples is the most accurate [3]. Since then, researchers have done much related work: Tao et al. [4] reconstructed the pore-fracture system of different macrolithotypes, Tao et al. [5] constructed a continuous distribution model of pore space for coal reservoirs, and Tao et al. [6] determined the pore and fracture system using the low-field nuclear magnetic resonance technique. However, the cost of sampling and testing is too high for large-scale industrial application.
In conventional logging interpretation, the quantitative calculation of porosity usually adopts a theoretical porosity model based on density, acoustic travel-time difference, and compensated neutron logging, or establishes a regional empirical porosity model combined with core analysis [7–9]. However, from the point of view of the interpretation model, parameter selection, and mathematical processing method, it is difficult to establish a good mapping relationship between

Hindawi, Mathematical Problems in Engineering, Volume 2021, Article ID 6641678, 13 pages. https://doi.org/10.1155/2021/6641678


the porosity and core analysis data, and the results are greatly affected by human factors. Nuclear magnetic resonance (NMR) porosity logging is basically not affected by the rock skeleton and only detects the signal of the fluid contained in pores. Therefore, this method has a strong advantage in predicting formation porosity [10], but due to limitations of equipment and cost, it cannot cover the logging work of a whole research area. In other words, if we can make full use of various logging parameters to comprehensively model and predict porosity, we can avoid not only the errors caused by subjective human factors but also the use of special and expensive methods such as petrophysical experiments and nuclear magnetic resonance techniques. In a word, making full use of logging parameters to establish a porosity prediction model is expected to yield accurate reservoir porosity information with high efficiency and low cost.

At present, some conventional machine learning algorithms have been applied to reservoir evaluation parameter prediction, such as the BP neural network [11–14], the support vector machine [15, 16], and other shallow machine learning algorithms [17–21]. However, shallow machine learning methods often need artificially extracted feature parameters, which requires strong domain knowledge and experience. Moreover, the ability of shallow machine learning to represent complex functions is limited in the case of limited samples, and its generalization ability is limited for complex nonlinear problems [22]. The traditional BP neural network also suffers from slow convergence and easily falls into local optima. The most important difference between deep learning and shallow learning is the increased depth of the network structure: a deep neural network usually has more than three hidden layers. Through layer-by-layer feature extraction and transformation, the samples are transformed from the original spatial feature representation to a new high-dimensional feature space for representation and description, thus simplifying classification or regression prediction problems and improving their accuracy. Hinton et al. [23] revealed that the greatest value of neural networks lies in automatic feature extraction: it avoids the tedious work of manual feature engineering and can automatically find complex and effective high-order features.

At present, the commonly used deep learning methods mainly include the convolutional neural network (CNN), the recurrent neural network (RNN), and the stacked autoencoder (SAE). These methods have been successfully applied in the fields of image processing and speech recognition [24–26]. Compared with shallow machine learning methods, deep learning methods achieve higher prediction accuracy. Because the sedimentary process of strata is gradual in time, and porosity is a response to the sedimentary characteristics of strata, porosity has certain time-series characteristics. A machine learning or deep learning method that ignores this easily misses the variation trend of porosity with reservoir depth and the correlation between historical data of different formation parameters. The recurrent neural network (RNN) is a typical deep neural network structure. Compared with fully

connected neural networks, the biggest difference is that the hidden layer units are not independent of each other: each hidden layer is related not only to the others but also to the timing input received before the current time step. This feature is of great help in processing time-series data. Long short-term memory (LSTM) is an improvement of the conventional RNN; the problem of gradient disappearance in the conventional RNN is solved by the fine design of the network's recurrent cell, making it one of the most successful recurrent neural networks. LSTM is very well suited to time-series problems, but it still has drawbacks such as a complex cell structure, many training parameters, and slow convergence during training. The gated recurrent unit (GRU) neural network is an optimization of the LSTM network: it has the same function as the LSTM network but converges faster. The GRU network has been applied in the fields of power, transportation, and finance [27–30], but it has not been used to predict the reservoir porosity parameter.

In summary, this study uses a GRU neural network to predict reservoir porosity on the basis of conventional logging parameters. First, correlation analysis (CA) based on the Copula function is used to quantitatively calculate the nonlinear correlation between logging curves and porosity, and the logging parameters with the highest correlation with porosity are selected. Then, based on the optimized logging parameters, a nonlinear mapping model between logging data and porosity is established using the GRU neural network. In addition, in order to demonstrate the advantages of the CA-GRU model in series porosity prediction, RNN, GRU, and MLR models are established as comparison models. Finally, the model proposed in this paper is applied to an actual data test, which demonstrates the prediction accuracy and robustness of the proposed method.

2. Theory and Methodology

2.1. Correlation Analysis. Logging curves and porosity parameters reflect the characteristics of strata at different depths, so to a certain extent there is a correlation between them; however, the test data often contain a variety of parameters reflecting different formation information from different angles. In practical application, if all sample data are directly used to establish the mapping relationship model between logging curves and porosity parameters, it will not only increase the complexity of the model but also lead to the loss of useful information or the addition of redundant information, resulting in decreased prediction accuracy. When physical parameters need to be predicted, it is necessary to consider the influence of different logging curves on the prediction accuracy. For example, through linear correlation analysis, some reliable, representative, and sensitive curves in the logging data are


selected as the input for modeling. However, the Pearson linear correlation coefficient often used to evaluate this correlation [13] focuses only on linear correlation and ignores the nonlinear relationship between porosity parameters and logging curves. Therefore, when the relationship between logging data and prediction parameters is nonlinear, it is not reliable to measure the correlation with the linear correlation coefficient. If the Copula function is used to analyze the correlation between logging data and prediction parameters, the nonlinear correlation between parameters can be taken into account to a certain extent. Based on the Copula function and its derived correlation indices, the nonlinear and asymmetric correlation between a logging curve and a predicted physical parameter can be accurately measured. Therefore, the Kendall rank correlation coefficient τ and the Spearman rank correlation coefficient ρ based on the Copula function are used to quantitatively analyze the correlation between logging curves and porosity parameters. The Kendall rank correlation coefficient τ measures the degree of consistent change between logging parameters and porosity, and the Spearman rank correlation coefficient ρ measures the degree of monotonic correlation between them. The calculation results are compared with those obtained with the Pearson linear correlation coefficient.

Copula function theory accurately describes the correlation between nonlinear and asymmetric variables. The details are as follows: suppose that the marginal probability distribution functions of an n-dimensional random-variable distribution function H are F_1(x_1), F_2(x_2), …, F_n(x_n), where (x_1, x_2, …, x_n) is an n-dimensional random variable. Then there is a Copula function C that satisfies

H(x_1, x_2, …, x_n) = C[F_1(x_1), F_2(x_2), …, F_n(x_n)],    (1)

where the N-dimensional t-Copula density function is defined as follows [31, 32]:

c(μ; ρ, υ) = |ρ|^{−1/2} · Γ((υ + N)/2) [Γ(υ/2)]^{N−1} (1 + (1/υ) ξ^T ρ^{−1} ξ)^{−(υ+N)/2} / { [Γ((υ + 1)/2)]^N ∏_{n=1}^{N} (1 + ξ_n²/υ)^{−(υ+1)/2} },    (2)

where ρ is the N-order symmetric positive definite matrix with unit diagonal elements, |ρ| is the determinant of the matrix ρ, Γ(·) is the gamma function, υ is the degree of freedom of the t-Copula function with N variables, ξ = [t_υ^{−1}(μ_1), t_υ^{−1}(μ_2), …, t_υ^{−1}(μ_N)], t_υ^{−1}(·) is the inverse function of the univariate t distribution with υ degrees of freedom, and μ_i (i = 1, 2, …, N) is the input independent variable.

The Kendall rank correlation coefficient τ measures the degree of consistent change between x and y. Suppose (x_1, y_1) and (x_2, y_2) are independent and identically distributed vectors, with x_1, x_2 ∈ x and y_1, y_2 ∈ y. Then

τ = P[(x_1 − x_2)(y_1 − y_2) > 0] − P[(x_1 − x_2)(y_1 − y_2) < 0],    (3)

where P is the probability distribution function. Rearranging the above formula gives

τ = 2P[(x_1 − x_2)(y_1 − y_2) > 0] − 1,  τ ∈ [−1, 1].    (4)

Suppose that the Copula function corresponding to (x_1, y_1) is C_1(μ, υ). Then the Kendall rank correlation coefficient τ can be obtained from the corresponding Copula function as follows:

τ = 4 ∫_0^1 ∫_0^1 C_1(μ, υ) dC_1(μ, υ) − 1.    (5)

For the Spearman rank correlation coefficient ρ, suppose that the joint distribution function of (x, y) is H(x, y) and the marginal distribution functions of x and y are F(x) and G(y), respectively; if H(x, y) = F(x)G(y), then the random variables x and y are independent of each other. If x_0 ∈ x and y_0 ∈ y, then x_0 and y_0 are independent of each other. If (x, y) and (x_0, y_0) are independent of each other, then

ρ = 3{P[(x − x_0)(y − y_0) > 0] − P[(x − x_0)(y − y_0) < 0]}.    (6)

Suppose that the Copula function of (x, y) is C(μ, υ), where μ = F(x) and υ = G(y); the Spearman rank correlation coefficient ρ can also be obtained from the corresponding Copula function as follows:

ρ = 12 ∫_0^1 ∫_0^1 μυ dC(μ, υ) − 3.    (7)
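The sample estimators behind equations (3)–(7) can be computed directly from paired data. The following sketch (pure Python, with illustrative toy data rather than the paper's well logs) estimates Kendall's τ from concordant and discordant pairs and Spearman's ρ from ranks, alongside Pearson's r for comparison:

```python
import math

def kendall_tau(x, y):
    """Sample Kendall tau: (concordant - discordant) / total pairs, cf. eq. (3)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def ranks(v):
    """Rank of each value, 1-based (no tie handling in this sketch)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman_rho(x, y):
    """Spearman rho is the Pearson correlation of the ranks."""
    return pearson_r(ranks(x), ranks(y))

# Toy monotone but nonlinear relation: Pearson understates the dependence
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [v ** 5 for v in x]
print(kendall_tau(x, y))   # 1.0 (perfectly concordant)
print(spearman_rho(x, y))  # 1.0 (perfectly monotone)
print(pearson_r(x, y))     # below 1: the relation is nonlinear
```

The rank measures flag the monotone dependence that the linear coefficient underrates, which is the motivation for the Copula-based selection described above.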

2.2. Recurrent Neural Network (RNN). The recurrent neural network (RNN) is a kind of neural network used to process sequence data [33]. Across different time steps, the RNN shares weights cyclically and makes connections between steps. The RNN structure with only one hidden layer is shown in Figure 1. Compared with a multilayer perceptron, the RNN hidden layer is connected not only to the output layer but also to the hidden layer nodes themselves: the output of the hidden layer is transmitted both to the output layer and back to the hidden layer. This lets the RNN reduce the number of parameters and also establish a nonlinear relationship between the sequence data at different times. Therefore, the RNN has unique advantages in dealing with nonlinear and time-series problems.
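The recurrence just described, reduced to a single hidden unit, amounts to h_t = tanh(W·x_t + U·h_{t−1}). A minimal scalar sketch (the weight values are arbitrary illustrations, not fitted parameters):

```python
import math

def rnn_step(x_t, h_prev, W=0.5, U=0.8, b=0.0):
    """One step of a scalar vanilla RNN: the hidden state feeds back into itself."""
    return math.tanh(W * x_t + U * h_prev + b)

# Unroll over a short input sequence, sharing the same weights at every step
h = 0.0
for x_t in [1.0, 0.5, -0.3]:
    h = rnn_step(x_t, h)
print(h)  # final hidden state summarizes the whole sequence
```

The same W and U are reused at every step, which is the weight sharing across time steps mentioned above.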

2.3. Long Short-Term Memory (LSTM) Network. The long short-term memory (LSTM) network is an important improvement of the RNN. It effectively solves the RNN's problems of gradient disappearance and gradient explosion and gives the network a stronger memory ability. In addition, the LSTM network can remember longer historical data information: it not only has an


external RNN cycle structure but also an internal "LSTM cell" circulation (self-circulation). Therefore, the LSTM does not simply impose an element-by-element nonlinearity on the affine transformation of the input and loop units. As in a common recurrent network, each unit has the same input and output structure, but it also has a gated unit system with more parameters that controls the information flow. The structure of the LSTM hidden layer is shown in Figure 2, where C_{t−1} is the cell state at the previous sequence step, h_{t−1} is the output of the hidden layer node at the previous step, x_t is the input to the hidden layer node at the current step, C_t is the current cell state, h_t is the output of the hidden layer node at the current step, σ is the sigmoid nonlinear activation function, and tanh is the hyperbolic tangent function.

Compared with the RNN, the LSTM network is better at learning the long-term dependence between sequence data, but the LSTM network has a complex structure, many parameters, and a slow convergence speed.

2.4. Gated Recurrent Unit (GRU) Neural Network. The gated recurrent unit (GRU) neural network, an important variant of the LSTM network, is an optimization and improvement of it. It inherits the LSTM network's ability to deal with nonlinear and time-series problems. Moreover, it not only retains the memory-unit function of the LSTM network but also simplifies the structure and reduces the number of parameters, thus greatly improving the training speed [34]. The structure of the GRU neural network is shown in Figure 3, where Z_t represents the update gate state, R_t represents the reset gate state, and H_t represents the pending output of the current neuron. The GRU neural network improves the design of the "gate" on the basis of the LSTM network: the original cell structure composed of three gates is simplified to a cell structure composed of two gates. In short, the gated recurrent unit consists of a reset gate and an update gate.

The states of the reset gate and the update gate at time t are defined as r_t and Z_t, respectively:

r_t = σ(W_r x_t + U_r h_{t−1}),  Z_t = σ(W_z x_t + U_z h_{t−1}),    (8)

where W and U are weight matrices and x_t is the input data. The hidden state h_t and the candidate hidden state h̃_t can be calculated according to the following formula:

h_t = (1 − Z_t) h_{t−1} + Z_t h̃_t,  h̃_t = tanh[W_h x_t + U_h (r_t ∗ h_{t−1})].    (9)

The two different activation functions in equations (8) and (9) are defined as follows:

σ(x) = 1 / (1 + exp(−x)),  tanh(x) = (1 − exp(−2x)) / (1 + exp(−2x)).    (10)
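Equations (8)–(10) can be checked with a direct scalar implementation. The sketch below (pure Python; all weight values are arbitrary illustrations, not fitted parameters) computes one GRU update step:

```python
import math

def sigmoid(x):
    # Equation (10): logistic activation
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x_t, h_prev, Wr, Ur, Wz, Uz, Wh, Uh):
    """One scalar GRU update following equations (8) and (9)."""
    r_t = sigmoid(Wr * x_t + Ur * h_prev)               # reset gate, eq. (8)
    z_t = sigmoid(Wz * x_t + Uz * h_prev)               # update gate, eq. (8)
    h_cand = math.tanh(Wh * x_t + Uh * (r_t * h_prev))  # candidate state, eq. (9)
    return (1.0 - z_t) * h_prev + z_t * h_cand          # new hidden state, eq. (9)

# Run the cell over a short sequence with arbitrary fixed weights
h = 0.0
for x_t in [0.2, -0.1, 0.4]:
    h = gru_step(x_t, h, Wr=0.6, Ur=0.3, Wz=0.5, Uz=0.2, Wh=0.9, Uh=0.7)
print(h)  # the state is a convex mix of h_prev and a tanh output
```

Because h_t is a convex combination of h_{t−1} and a tanh output, the hidden state stays bounded in (−1, 1), which is one reason GRU training is stable.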

2.5. Structure of the GRU Neural Network Prediction Model. In the prediction of reservoir porosity parameters, the logging curves from shallow to deep reflect the formation

Figure 1: Schematic diagram of the RNN structure with only one hidden layer (input layer, hidden layer, output layer; figure not reproduced).

Figure 2: Hidden layer structure of the LSTM network (figure not reproduced).

Figure 3: Schematic diagram of the GRU neural network (figure not reproduced).


characteristics of different geological periods, and their change trends include important information about physical parameters [14]. When using traditional statistical analysis and conventional machine learning methods to predict porosity parameters, it is easy to destroy the potential internal relations in the historical series of logging parameters and reduce the accuracy of the prediction results. Unlike other machine learning or deep learning methods, the GRU neural network has a long-term memory ability [35]. By handling the long-term dependence between series data, the GRU neural network can effectively exploit such relationships, and its internal gating mechanism can also automatically learn time-series features [36]. Figure 4 shows the structure of the three-layer GRU neural network model.

As can be seen from Figure 4, the structure of the GRU neural network model includes an input layer, hidden layers, and an output layer, of which the hidden layers are the core part of the network structure. In the training process, it is necessary to optimize and adjust the hyperparameters of the GRU neural network model structure, including the number of hidden layers and the number of hidden layer neurons. Theoretically, the more hidden layers and neurons, the better the model performance: the deeper and more complex the network, the higher the prediction accuracy. However, some studies [13, 18] have shown that too many hidden layers and neurons lead to training difficulties and overfitting, which reduces the prediction accuracy of the model. If the network is too shallow and simple, it will easily underfit and fail to meet the expected requirements. Therefore, the choice of the number of hidden layers and the number of neurons is very important to the prediction performance of the network. We need to balance the learning ability of the network against the complexity of training and the required prediction accuracy, and determine the optimal number of nodes and hidden layers from experience and repeated experiments. In addition, the optimization of training parameters such as the learning rate, batch size, and maximum number of iterations can reduce the complexity of the model to a certain extent and improve its convergence speed and prediction accuracy.

The training process of the GRU neural network can be roughly divided into three steps:

Step 1: input training data into the network, calculate the output of each GRU unit from shallow to deep along the forward-propagation direction, and obtain the predicted output value corresponding to the input data at the current time point.

Step 2: calculate the error of each neuron along the back-propagation direction. Error back-propagation in the GRU neural network includes propagation along the time sequence and propagation layer by layer between the network layers.

Step 3: calculate the gradient of each weight from the error obtained by back-propagation, and adjust the network weights along the gradient using the learning-rate-adaptive optimization algorithm (the Adam algorithm). Finally, repeat the above steps to iteratively optimize the network.
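Step 3 relies on the Adam algorithm. A minimal single-parameter sketch of one Adam update (standard Adam defaults, pure Python, toy objective; this is not the paper's actual training code):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single parameter theta given its gradient."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta**2 (gradient 2*theta) starting from theta = 1.0
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t)
print(theta)  # driven toward the minimum at 0
```

Adam's per-parameter step is bounded by roughly the learning rate regardless of the raw gradient scale, which is why it is a common default for recurrent networks.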

2.6. Prediction Model Based on CA-GRU. The modeling process of the combined forecasting model based on CA-GRU is shown in Figure 5 and mainly includes the following six steps:

Step 1: the obtained logging curves and porosity parameters are used as the database. The Kendall rank correlation coefficient τ and Spearman rank correlation coefficient ρ based on the Copula function, together with the Pearson linear correlation coefficient P, are used to quantitatively calculate and analyze the correlation between them, and the logging curves sensitive to the prediction parameters are selected to form new sample data.

Step 2: the new sample data are standardized and divided into a training set and a test set in a certain proportion.

Step 3: the GRU neural network model for porosity prediction is constructed and the network parameters are initialized. The number of network layers and the number of hidden layer neurons are determined by experiment.

Step 4: during the training process, the network structure is continuously optimized until the training error of the model reaches the preset target, and then the model is saved.

Step 5: the trained GRU neural network model is tested with the held-out test set, and the predicted value of the model is denormalized to obtain the predicted porosity corresponding to the actual value.

Step 6: the predicted value is compared with the actual value and the error is analyzed. The prediction

Figure 4: Structure diagram of the three-layer GRU neural network model (input layer, three GRU hidden layers, and output layer; figure not reproduced).


performance of the model is evaluated according to the corresponding evaluation index.
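Steps 2 and 5 involve min-max standardization and its inverse. A small sketch (pure Python; the [0, 1] scaling matches the standardization described in Section 4, and all numeric values here are toy data, not the paper's logs):

```python
def fit_minmax(values):
    """Record the min and max of a curve so the same scaling can be inverted later."""
    lo, hi = min(values), max(values)
    return lo, hi

def normalize(values, lo, hi):
    # Step 2: map raw curve values into [0, 1]
    return [(v - lo) / (hi - lo) for v in values]

def denormalize(scaled, lo, hi):
    # Step 5: map model outputs back to physical units
    return [s * (hi - lo) + lo for s in scaled]

porosity = [4.2, 7.9, 6.1, 5.5, 9.3]  # toy porosity values in percent
lo, hi = fit_minmax(porosity)
scaled = normalize(porosity, lo, hi)
restored = denormalize(scaled, lo, hi)
print(scaled)    # all values lie in [0, 1]
print(restored)  # round-trips back to the original values
```

Keeping the (lo, hi) pair fitted on the training data and reusing it for the test set and for the inverse transform is what makes the denormalized predictions comparable with the actual porosity values.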

3. Data Processing and Analysis

3.1. Data Preparation. As shown in Figure 6(a), the Ordos Basin is a large superimposed basin in central China with huge oil and gas resources. It is considered one of the basins with the greatest potential for growth in oil and gas reserves and production in China, and it is a petroliferous basin with stratigraphic and lithologic traps as its main structural traps. The accumulated explored natural gas reserves in the Ordos Basin are about 2.7 × 10¹² m³, and the geological resources still to be explored are about 12.5 × 10¹² m³, which indicates that the Ordos Basin is still in an early stage of exploration. In addition, the Ordos Basin is also rich in unconventional oil and gas resources such as coalbed methane, shale gas, and tight sandstone gas. The geological resources of coalbed methane in the basin are about 9.86 × 10¹² m³, of which the recoverable amount is about 1.79 × 10¹² m³. The geological resources of shale gas in the basin are about 5.3 × 10¹² m³, of which the recoverable reserves are about 2.91 × 10¹² m³. The geological resources of tight sandstone gas in the basin are about 7.84 × 10¹² m³, of which the recoverable amount is about 3.53 × 10¹² m³ [18]. For conventional and unconventional oil and gas resources alike, the Ordos Basin clearly has great exploration potential. Therefore, a fast and accurate method is needed to obtain porosity information, which is an important parameter for oil and gas exploration.

As shown in Figure 6(b), well E1 is a shale gas well in the northeastern part of the Ordos Basin, China. Preparing the database, including the logging data, is a very important step in the construction of the model. In this study, the available well logs of well E1 include spontaneous potential (SP), compensated neutron log (CNL), compressional wave slowness (DTC), resistivity (RT), uranium (U), natural gamma ray (GR), bulk density (DEN), potassium (K), and thorium (TH). Table 1 summarizes the recorded logging parameter data for well E1.

Figure 7 presents the logging parameter plot for well E1. This study is vital and meaningful for oil and gas exploration, since obtaining porosity information from logging parameters is very important in reservoir evaluation.

3.2. Data Analysis. For the model, it is very important to select suitable logging inputs when preparing for accurate porosity prediction. In this study, the Kendall rank correlation coefficient τ and Spearman rank correlation coefficient ρ based on the Copula function, together with the Pearson linear correlation coefficient R, are used to quantitatively calculate the correlation between logging parameters and porosity. The comparison of the absolute values of the three correlation coefficients is shown in Figure 8.

As can be seen from Figure 8, Pearson correlation analysis often misses the nonlinear correlation between logging parameters and porosity. In the correlation analysis between logging data and porosity, the correlation coefficients of DTC, DEN, and CNL with porosity are relatively

Figure 5: Modeling flow chart of the porosity parameter prediction model based on CA-GRU: data → correlation analysis → data preprocessing → division into training and testing sets → model parameter initialization → devising and training the GRU model; if the expected results are achieved, the model is saved and tested to obtain the prediction results, otherwise the parameters are adjusted and training is repeated.

6 Mathematical Problems in Engineering

high, indicating that DTC, DEN, and CNL have a strong correlation with porosity. Although the Pearson correlation coefficient between GR and porosity is relatively low, the Copula-based correlation analysis yields higher τ and ρ, which shows that the linear correlation between GR and porosity is low but the nonlinear correlation is high. To sum up, both linear and nonlinear correlation analysis methods are used in this study to optimize the conventional logging parameters. Finally, four logging parameters, DTC, DEN, CNL, and GR, are selected as the independent variables for network modeling, and a porosity prediction model is established.

4. Results and Discussion

As mentioned above, a set of 5165 data points from well E1 has been used to build the model. This set is divided into training and testing subsets by depth: the training subset consists of the first 3874 data points, while the testing subset consists of the remaining points. For comparison, RNN, GRU, CA-GRU, and multiple linear regression (MLR) models were selected and applied to predict porosity.
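The depth-ordered split described above can be sketched as follows; the arrays are random stand-ins for the four selected logs and porosity, not the actual well data.

```python
import numpy as np

# 5165 depth-ordered samples: first 3874 for training, the rest for testing,
# as in the text. Columns stand in for the four logs (DTC, DEN, CNL, GR).
rng = np.random.default_rng(0)
X = rng.normal(size=(5165, 4))
y = rng.normal(size=5165)

n_train = 3874
X_train, y_train = X[:n_train], y[:n_train]   # shallower interval
X_test, y_test = X[n_train:], y[n_train:]     # deeper interval
```

Splitting by depth, rather than at random, keeps the test interval contiguous and preserves the time-series character of the logs.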

In the beginning, all data are standardized to the range from zero to one with equation (11), which is used to prepare the training and testing data sets for the RNN, GRU, and CA-GRU models. In addition, the convergence of the neural networks may be

Figure 6: (a) The position of Ordos Basin in China; (b) the structural belts of Ordos Basin and the sampling well E1 near the Yimeng uplift [15].

Table 1: Summary of the recorded logging parameters for well E1.

            SP (mV)  GR (API)  DTC (μs/ft)  RT (Ω·m)  U (ppm)  K (%)  TH (ppm)  DEN (g/cm³)  CNL (%)
Miv           67.64     20.65        54.42      9.44     2.09   0.40      1.71         2.45     7.39
Mav           74.98     66.75        91.56     52.01     7.36   2.33     11.90         2.72    27.27
Average       72.17     44.04        74.33     23.29     4.39   1.08      5.43         2.57    16.18
SD             1.38      7.74         6.89      7.09     1.08   0.31      1.77         0.04     3.78

Miv = minimum value; Mav = maximum value; SD = standard deviation.


guaranteed by this preprocessing, which also increases the calculating speed of the network methods:

x^{*} = \frac{x - x_{\min}}{x_{\max} - x_{\min}},   (11)

where x^{*} is the normalized data, x is the original data, and x_{\max} and x_{\min} are the maximum and minimum of the original dataset, respectively.
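Equation (11) amounts to column-wise min-max scaling; a minimal sketch:

```python
import numpy as np

def min_max_normalize(x, x_min=None, x_max=None):
    """Eq. (11): scale values to [0, 1] column-wise.

    Passing x_min/x_max lets the training-set extremes be reused for the
    testing set, which avoids leaking test statistics into the scaling.
    """
    x = np.asarray(x, dtype=float)
    if x_min is None:
        x_min = x.min(axis=0)
    if x_max is None:
        x_max = x.max(axis=0)
    return (x - x_min) / (x_max - x_min)

scaled = min_max_normalize(np.array([[1.0], [2.0], [3.0]]))
```

The paper does not state whether the extremes were taken from the whole dataset or the training set only; the optional arguments cover both conventions.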

In this study, various statistical criteria, including the correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE), are used to compare prediction performance. These performance indicators provide a sense of how well the prediction model performs with respect to the actual values. The following equations give the above standard tools.

The R criterion can be described as follows:

R = \sqrt{1 - \frac{\sum_{i=1}^{n} (\hat{p}_i - p_i)^2}{\sum_{i=1}^{n} \hat{p}_i^2 - (1/n)\left(\sum_{i=1}^{n} \hat{p}_i\right)^2}}.   (12)

The MAE criterion may be depicted as follows:

\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \hat{p}_i - p_i \right|.   (13)

VAF is usually used to assess the accuracy of a model by comparing the assessed values with the evaluated output of the model, and the VAF criterion can be computed as follows:

\mathrm{VAF} = \left(1 - \frac{\operatorname{var}(\hat{p}_i - p_i)}{\operatorname{var}(\hat{p}_i)}\right) \times 100.   (14)

The RMSE is traditionally applied to monitor the quality of the error function of the model; the performance of the model increases as the RMSE decreases. The RMSE criterion can be computed as follows:

\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(\hat{p}_i - p_i\right)^2},   (15)

where, for equations (12)–(15), p is the measured porosity, \hat{p} denotes the assessed porosity, and n is the number of testing data points.

Figure 7: Logging parameter plots for well E1 (SP, GR, DTC, DEN, CNL, RT, U, TH, and K versus depth, approximately 3340–3440 m).

Figure 8: Correlation coefficients (τ, ρ, and R) between the logging data (SP, GR, U, DTC, DEN, CNL, K, RT, and TH) and the porosity parameter.


In machine learning studies, robustness is a key characteristic. To verify that the way the training and testing data sets are selected does not affect the robustness of the GRU, we randomly selected ten pairs of training and testing sets from the 5165 data points in well E1. The randomly selected sets are shown in Figure 9(a), where the color represents the corresponding depth of each sample. For each of these ten cases, a GRU can be modeled and the RMSE of its training set and testing set calculated, respectively; the results are shown in Figure 9(b).

As shown in Figure 9(b), differences in the training samples lead to inconsistent GRU prediction errors; that is, the selection of training samples changes the GRU prediction error. To further analyze the influence of different training samples on the GRU forecasting error, we use statistical methods. A one-way analysis of variance (at the 5% significance level) was carried out on the RMSE data of the ten cases above, and the results are shown in Table 2. Since P = 0.569 > 0.05, there are no significant differences between the RMSE data of the 10 cases. Through this comparative analysis, we can safely conclude that although different training samples produce differences in the GRU prediction error, the differences are not statistically significant. From a statistical point of view, the selection method of the training and testing samples therefore has little influence on the robustness of the GRU. Accordingly, this study divides the training set and the testing set by depth order, which conforms to the statistical law and is feasible and practical.
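The one-way ANOVA above can be reproduced with SciPy; the RMSE pairs below are random placeholders, not the ten GRU runs of Figure 9(b). With ten groups of two observations each, the degrees of freedom are 9 (between groups), 10 (within groups), and 19 (total), matching Table 2.

```python
import numpy as np
from scipy import stats

# Hypothetical (training RMSE, testing RMSE) pairs for ten random splits;
# real values would come from the ten GRU runs described above.
rng = np.random.default_rng(1)
cases = [rng.normal(loc=1.2, scale=0.1, size=2) for _ in range(10)]

f_stat, p_value = stats.f_oneway(*cases)
significant = p_value < 0.05   # the paper reports P = 0.569, not significant
```

A P value above 0.05 indicates no statistically significant difference between the cases, which is the paper's justification for the depth-ordered split.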

As is well known, determining the parameters of the neural network model is the premise of successfully constructing it. In this study, the adaptive learning rate optimization algorithm (Adam) is used to optimize the network. The Adam algorithm combines the advantages of the RMSProp and AdaGrad algorithms and can

design an independent adaptive learning rate for each parameter. According to our tests, the best learning rate is 0.005, the batch size is 10, and the time step is 50. According to previous studies on neural networks [18, 37], the numbers of hidden layers and hidden layer nodes greatly influence prediction performance, and the appropriate numbers differ between research fields; choosing the optimal numbers is therefore key to ensuring the prediction accuracy of the neural network model. Based on traversal (exhaustive) optimization, this study sets the hidden layer range as [1, 10] and the hidden layer node range as [1, 100]. By comparing the root mean square error (RMSE) of the model under different numbers of hidden layers and nodes, the values that minimize the RMSE are obtained. The optimization test results are shown in Figure 10. It can be seen from the figure that too many or too few hidden layers and neurons lead to drastic changes in the RMSE of the prediction results and hence to a decrease in prediction accuracy. Through traversal optimization, the optimal numbers of hidden layers and nodes for this study are 3 and 41, respectively.
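The gated recurrent unit update that underlies such a model can be written out directly. The NumPy sketch below implements one standard form of the GRU cell (update gate z, reset gate r, candidate state h̃) with the 4 input logs and 41 hidden units reported above; the weights are random placeholders, not a trained model, and production work would use a deep learning framework's GRU layer with the Adam settings given in the text.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, W, U, b):
    """One GRU update. W, U, b hold the update (z), reset (r), and
    candidate (h) parameters; h is the previous hidden state."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])        # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])        # reset gate
    h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])
    return (1.0 - z) * h + z * h_tilde                   # new hidden state

# Hypothetical sizes: 4 input logs, 41 hidden units (as reported above).
rng = np.random.default_rng(0)
n_in, n_hid = 4, 41
W = {k: rng.normal(scale=0.1, size=(n_hid, n_in)) for k in "zrh"}
U = {k: rng.normal(scale=0.1, size=(n_hid, n_hid)) for k in "zrh"}
b = {k: np.zeros(n_hid) for k in "zrh"}

h = np.zeros(n_hid)
for x in rng.normal(size=(50, n_in)):   # one 50-step input window
    h = gru_step(x, h, W, U, b)
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden activations remain in (−1, 1); the final h would feed a linear output layer for porosity regression.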

According to the above correlation analysis results, the four logging parameters DTC, DEN, CNL, and GR, which have a strong correlation with porosity, are selected as the modeling

Table 2: The results of the one-way ANOVA.

                 Sum of squares   df   Mean square     F     P value
Between groups            0.077    9         0.009  0.883     0.569
Within groups             0.097   10         0.010
Total                     0.174   19

Figure 9: Results of the GRU with randomly selected training sets. (a) Ten cases of randomly selected sets, with color indicating the depth (3340–3440 m) of each sample. (b) The RMSE of training and testing for each case.


independent variables of the porosity prediction model. Figure 11 shows the comparison between the actual and predicted porosity of the RNN, GRU, CA-GRU, and MLR models along the depth of well E1, together with local enlargements. It shows that the results of the MLR model differ considerably from the measured porosity, while

Figure 10: Model performance (RMSE) under different numbers of hidden layers (1–10) and hidden layer nodes (1–100).

Figure 11: A comparison between measured and predicted porosity with the RNN, GRU, CA-GRU, and MLR models, respectively, in well E1, with enlargements of the 3370–3390 m interval. (a) Measured versus RNN-predicted porosity. (b) Measured versus GRU-predicted porosity. (c) Measured versus CA-GRU-predicted porosity. (d) Measured versus MLR-predicted porosity.


the results of the RNN, GRU, and CA-GRU models are very consistent with the actual situation.

Eventually, the statistical indicators mentioned previously were also employed in this comparison, and the outcomes are presented in Table 3. Table 3 shows that the three porosity prediction models based on deep learning were far superior in prediction accuracy to the model established with the traditional multiple linear regression (MLR) method. Comparing the prediction precision of the three deep learning models established in this study (RNN, GRU, and CA-GRU), the GRU and CA-GRU models were evidently superior to the RNN model, and CA-GRU achieved the best precision, with the highest R of 0.9423 and VAF of 88.7578% and the lowest MAE of 0.2101 and RMSE of 1.1412. Between CA-GRU and GRU, there was only a slight discrepancy in precision.

To conclude, both the CA-GRU and GRU models can provide successful porosity prediction performance, with the CA-GRU model showing the higher precision. From Table 3 we can conclude that the CA-GRU model built in this study predicts porosity efficiently and with high precision, and the verification results in well E1 prove that the CA-GRU model with optimal inputs can be regarded as an efficient tool for predicting porosity, in particular in areas where high-precision porosity data are required.

5. Conclusions

Logging parameters reflect the acoustic, electrical, and other physical responses of strata deposited in different geological periods. Porosity is the characteristic response of different formations and has strong time series characteristics. Directly predicting porosity from logging parameters can effectively avoid the high cost of special methods such as rock physical analysis and nuclear magnetic resonance, and can provide an accurate, low-cost decision-making basis for petroleum exploration and development in oil and gas fields. Deep learning technology can find the nonlinear relationship between different parameters entirely from the data, which makes it very suitable for solving nonlinear geophysical interpretation problems. It can not only make full use of the response characteristics of various logging parameters to different formations at the same time but can also overcome the limitations of the linear predictions of traditional empirical formulas.

Considering the time series characteristics of logging and porosity parameters, a reservoir porosity prediction method based on the GRU neural network is proposed in this study. It uses the correlation analysis method based on the Copula function to select sensitive logging parameters and then uses the GRU neural network to build the prediction model. This not only considers the influence of strongly correlated sample data on the prediction of the porosity parameter but also takes into account the nonlinear mapping relationship between the porosity parameter and the logging curves, as well as the trend and correlation of the logging information with depth. The Copula-based correlation measure optimizes the well logging curves that are sensitive to the porosity parameter, reduces the dimension of the model input, eliminates redundancy between variables, and improves the overall prediction performance of the model. The research results show that the GRU neural network model has strong feature extraction ability and can effectively extract deep characteristics reflecting the porosity parameter from logging data. Compared with models such as multiple linear regression analysis, it predicts the porosity parameter more accurately and has strong robustness and anti-interference ability. This study provides a new idea for the accurate interpretation of logging data in oil and gas field exploration and development.

Abbreviations

DNN: Deep neural network
GRU: Gated recurrent unit
NMR: Nuclear magnetic resonance
CNN: Convolutional neural network
RNN: Recurrent neural network
SAE: Stack autoencoder
U: Uranium
DEN: Bulk density
TH: Thorium
MAE: Mean absolute error
RMSE: Root mean square error
LSTM: Long short-term memory
CA: Correlation analysis
MLR: Multiple linear regression
SP: Spontaneous potential
CNL: Compensated neutron log
DTC: Compressional wave slowness
RT: Resistivity
GR: Natural gamma-ray
K: Potassium
R: Correlation coefficient
VAF: Variance accounted for
CA-GRU: Correlation analysis-gated recurrent unit

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Table 3: Comparison of the CA-GRU model with the RNN, GRU, and MLR models using four performance indicators: correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE).

Method    R       MAE     VAF (%)   RMSE
RNN       0.9034  0.2728  81.5798   1.4952
GRU       0.9280  0.2303  86.1089   1.2643
CA-GRU    0.9423  0.2101  88.7578   1.1412
MLR       0.8489  0.9035  71.7449   2.2589


Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are grateful for the financial support provided by the National Natural Science Foundation of China (41504041). The authors would also like to thank Xiaoyan Deng for assisting in the preparation of this manuscript, and are particularly grateful to her for checking the grammar, style, and syntax of the manuscript.

References

[1] S. Singh, A. I. Kanli, and S. Sevgen, "A general approach for porosity estimation using artificial neural network method: a case study from Kansas gas field," Studia Geophysica et Geodaetica, vol. 60, no. 1, pp. 130–140, 2016.
[2] Z. Zhong and T. R. Carr, "Application of a new hybrid particle swarm optimization-mixed kernels function-based support vector machine model for reservoir porosity prediction: a case study in Jacksonburg-Stringtown oil field, West Virginia, USA," Interpretation, vol. 7, no. 1, pp. T97–T112, 2019.
[3] A. Bakhorji, H. Mustafa, and S. Aramco, Rock Physics Modeling and Analysis of Elastic Signatures for Intermediate to Low Porosity Sandstones, pp. 1–5, Society of Exploration Geophysicists, Houston, TX, USA, 2012.
[4] S. Tao, Z. Pan, S. Chen, and S. Tang, "Coal seam porosity and fracture heterogeneity of marcolithotypes in the Fanzhuang Block, southern Qinshui Basin, China," Journal of Natural Gas Science and Engineering, vol. 66, pp. 148–158, 2019.
[5] S. Tao, X. Zhao, D. Tang, C. Deng, Q. Meng, and Y. Cui, "A model for characterizing the continuous distribution of gas storing space in low-rank coals," Fuel, vol. 233, pp. 552–557, 2018.
[6] S. Tao, S. Chen, D. Tang, X. Zhao, H. Xu, and S. Li, "Material composition, pore structure and adsorption capacity of low-rank coals around the first coalification jump: a case of eastern Junggar Basin, China," Fuel, vol. 211, pp. 804–815, 2018.
[7] C. Zhang, X. Wang, and L. Q. Zhu, "Estimation of total porosity in shale formations from element capture logging and conventional logging data," Arabian Journal of Geosciences, vol. 11, no. 11, p. 264, 2018.
[8] A. Khaksar and C. M. Griffiths, "Porosity from sonic log in gas-bearing shaly sandstones: field data versus empirical equations," Exploration Geophysics, vol. 29, no. 3-4, pp. 440–446, 1998.
[9] C. Guo, C. Zhang, and L. Zhu, "Predicting the total porosity of shale gas reservoirs," Petroleum Science and Technology, vol. 35, no. 10, pp. 1022–1031, 2017.
[10] H. L. Crow, R. J. Enkin, J. B. Percival, and H. A. J. Russell, "Downhole nuclear magnetic resonance logging in glaciomarine sediments near Ottawa, Ontario, Canada," Near Surface Geophysics, vol. 18, no. 6, p. 591, 2020.
[11] J. O. Parra, C. Hackert, M. Bennett, and H. A. Collier, "Permeability and porosity images based on NMR, sonic, and seismic reflectivity: application to a carbonate aquifer," The Leading Edge, vol. 22, no. 11, pp. 1102–1108, 2003.
[12] E. P. Leite and A. C. Vidal, "3D porosity prediction from seismic inversion and neural networks," Computers & Geosciences, vol. 37, no. 8, pp. 1174–1180, 2011.
[13] X. Shi, G. Liu, Y. Cheng et al., "Brittleness index prediction in shale gas reservoirs based on efficient network models," Journal of Natural Gas Science and Engineering, vol. 35, pp. 673–685, 2016.
[14] P. Wang, S. Peng, and T. He, "A novel approach to total organic carbon content prediction in shale gas reservoirs with well logs data, Tonghua Basin, China," Journal of Natural Gas Science and Engineering, vol. 55, pp. 1–15, 2018.
[15] P. Wang and S. Peng, "A new scheme to improve the performance of artificial intelligence techniques for estimating total organic carbon from well logs," Energies, vol. 11, no. 4, p. 747, 2018.
[16] F. S. Feng, P. Wang, Z. Wei et al., "A new method for predicting the permeability of sandstone in deep reservoirs," Geofluids, vol. 2020, Article ID 8844464, 16 pages, 2020.
[17] A. A. Talkhestani, "Prediction of effective porosity from seismic attributes using locally linear model tree algorithm," Geophysical Prospecting, vol. 63, no. 3, pp. 680–693, 2015.
[18] P. Wang and S. Peng, "On a new method of estimating shear wave velocity from conventional well logs," Journal of Petroleum Science and Engineering, vol. 180, pp. 105–123, 2019.
[19] F. S. T. Haklidir and M. Haklidir, "Prediction of reservoir temperatures using hydrogeochemical data, western Anatolia geothermal systems (Turkey): a machine learning approach," Natural Resources Research, vol. 29, no. 4, pp. 2333–2346, 2020.
[20] A. A. Mahmoud, S. Elkatatny, and D. Al Shehri, "Application of machine learning in evaluation of the static Young's modulus for sandstone formations," Sustainability, vol. 12, no. 5, p. 1880, 2020.
[21] M. He, H. Gu, and H. Wan, "Log interpretation for lithology and fluid identification using deep neural network combined with MAHAKIL in a tight sandstone reservoir," Journal of Petroleum Science and Engineering, vol. 194, p. 107498, 2020.
[22] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.
[23] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[24] L. C. Mao and J. Zhao, "A survey on the new generation of deep learning in image processing," IEEE Access, vol. 7, pp. 172231–172263, 2019.
[25] R. Udendhran, M. Balamurugan, A. Suresh, and R. Varatharajan, "Enhancing image processing architecture using deep learning for embedded vision systems," Microprocessors and Microsystems, vol. 76, p. 103094, 2020.
[26] R. J. G. Van Sloun, R. Cohen, and Y. C. Eldar, "Deep learning in ultrasound imaging," Proceedings of the IEEE, vol. 108, no. 1, pp. 11–29, 2020.
[27] J. Liu, C. Wu, and J. Wang, "Gated recurrent units based neural network for time heterogeneous feedback recommendation," Information Sciences, vol. 423, pp. 50–65, 2018.
[28] Y. Hao, Y. Sheng, and J. Wang, "Variant gated recurrent units with encoders to preprocess packets for payload-aware intrusion detection," IEEE Access, vol. 7, pp. 49985–49998, 2019.
[29] F. T. Dezaki, Z. B. Liao, C. Luong et al., "Cardiac phase detection in echocardiograms with densely gated recurrent neural networks and global extrema loss," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1821–1832, 2019.
[30] Z. Wang, Y. Dong, W. Liu, and Z. Ma, "A novel fault diagnosis approach for chillers based on 1-D convolutional neural network and gated recurrent unit," Sensors, vol. 20, no. 9, p. 2458, 2020.
[31] H. Colonius, "An invitation to coupling and copulas, with applications to multisensory modeling," Journal of Mathematical Psychology, vol. 74, pp. 2–10, 2016.
[32] N. Uyttendaele, "On the estimation of nested Archimedean copulas: a theoretical and an experimental comparison," Computational Statistics, vol. 33, no. 2, pp. 1047–1070, 2018.
[33] P. Vincent-Lamarre, M. Calderini, and J.-P. Thivierge, "Learning long temporal sequences in spiking networks by multiplexing neural oscillations," Frontiers in Computational Neuroscience, vol. 14, p. 78, 2020.
[34] W. Huang, Y. Li, and Y. Huang, "Deep hybrid neural network and improved differential neuroevolution for chaotic time series prediction," IEEE Access, vol. 8, pp. 159552–159565, 2020.
[35] Y. Wu, M. Yuan, S. Dong, L. Lin, and Y. Liu, "Remaining useful life estimation of engineered systems using vanilla LSTM neural networks," Neurocomputing, vol. 275, pp. 167–179, 2018.
[36] H. Yan, Y. Qin, S. Xiang, Y. Wang, and H. Chen, "Long-term gear life prediction based on ordered neurons LSTM neural networks," Measurement, vol. 165, p. 108205, 2020.
[37] X. Shi, J. Wang, G. Liu, L. Yang, X. Ge, and S. Jiang, "Application of extreme learning machine and neural networks in total organic carbon content prediction in organic shale with wire line logs," Journal of Natural Gas Science and Engineering, vol. 33, pp. 687–702, 2016.



the porosity and core analysis data, and the results are greatly affected by human factors. Nuclear magnetic resonance (NMR) porosity logging is basically not affected by the rock skeleton and only detects the signal of the fluid contained in the pores; therefore, this method has a strong advantage in predicting formation porosity [10]. However, due to the limitations of equipment and cost, this method cannot cover the logging work of the whole research area. In other words, if we can make full use of various logging parameters to comprehensively model and predict porosity, we can avoid not only the errors caused by subjective human factors but also the use of special methods such as expensive petrophysical experiments and nuclear magnetic resonance techniques. In a word, making full use of logging parameters to establish a porosity prediction model is expected to yield accurate reservoir porosity information with high efficiency and low cost.

At present, some conventional machine learning algorithms have been applied to reservoir evaluation parameter prediction, such as the BP neural network [11–14], support vector machines [15, 16], and other shallow machine learning algorithms [17–21]. However, shallow machine learning methods often need artificially extracted feature parameters, which requires strong domain knowledge and experience. Moreover, the ability of shallow machine learning to represent complex functions is limited in the case of limited samples, and its generalization ability is limited for complex nonlinear problems [22]. The traditional BP neural network also suffers from slow convergence and easily falls into local optimal solutions. The most important difference between deep learning and shallow learning is the increased depth of the network structure: a deep neural network usually has more than three hidden layers. Through layer-by-layer feature extraction and transformation, the samples are transformed from the original spatial feature representation to a new high-dimensional feature space for representation and description, thus simplifying classification or regression prediction problems and improving their accuracy. Hinton et al. [23] revealed that the greatest value of neural networks lies in automatic feature extraction: it avoids the tedious work of manual feature extraction and can automatically find complex and effective high-order features.

At present, the commonly used deep learning methods mainly include the convolutional neural network (CNN), recurrent neural network (RNN), and stacked autoencoder (SAE). These methods have been successfully applied in the fields of image processing and speech recognition [24–26]. Compared with shallow machine learning methods, deep learning methods have higher prediction accuracy. Because the sedimentary process of strata is gradual in time series and porosity is the response of the sedimentary characteristics of the strata, porosity has certain time series characteristics. When a generic machine learning or deep learning method is used to predict physical property parameters, it is easy to ignore the variation trend of the porosity parameter with reservoir depth and the correlation between historical data of different formation parameters. The recurrent neural network (RNN) is a typical deep neural network structure. Compared with fully

connected neural networks, the biggest difference is that the hidden layer units are not independent of each other: each hidden layer is related not only to the current input but also to the inputs at earlier time steps that the hidden layer unit accepts. This feature is of great help in processing time-series-related data. The long short-term memory (LSTM) network is an improvement of the conventional RNN; the problem of gradient disappearance in the conventional RNN is solved by the careful design of its gating structure, and it is one of the most successful recurrent neural networks. LSTM is very suitable for solving time series problems, but it still has some problems, such as a complex network structure, many training parameters, and slow convergence in the training process. The gated recurrent unit (GRU) neural network is an optimization of the LSTM network: it has the same function as the LSTM network but converges faster. The GRU network has been applied in the fields of power, transportation, and finance [27–30], but it has not been used to predict the reservoir porosity parameter.

In summary, this study uses the GRU neural network to predict the reservoir porosity parameter on the basis of conventional logging parameters. Firstly, correlation analysis (CA) based on the Copula function is used to quantitatively calculate the nonlinear correlation between the logging curves and the porosity parameter, and the logging parameters with a higher degree of correlation with the porosity parameter are selected. Then, based on the optimized logging parameters, a nonlinear mapping model between logging data and the porosity parameter is established using the GRU neural network. In addition, in order to demonstrate the advantages of the CA-GRU model in series porosity prediction, RNN, GRU, and MLR models are established as comparison models. Finally, the proposed model is applied to actual data, which proves the prediction accuracy and robustness of the proposed method.

2. Theory and Methodology

2.1. Correlation Analysis. Logging curves and porosity parameters reflect the characteristics of the strata at different depths. To a certain extent, there is a correlation between the porosity parameter and the logging curves, but the measured data often contain a variety of parameters reflecting different formation information from different angles. In practical applications, if all sample data are directly used to establish the mapping model between logging curves and the porosity parameter, this will not only increase the complexity of the model but may also lead to the loss of useful information or an increase in redundant information, resulting in decreased prediction accuracy. When physical parameters need to be predicted, it is necessary to consider the influence of different logging curves on the prediction accuracy. For example, through linear correlation analysis, some reliable, representative, and sensitive curves in the logging data are


selected as the input for modeling. However, the Pearson linear correlation coefficient, which is often used to evaluate correlation [13], only focuses on linear correlation and ignores the nonlinear relationship between the porosity parameter and the logging curves. Therefore, when the relationship between logging data and the prediction parameter is nonlinear, it is not reliable to measure the correlation with the linear correlation coefficient. If the Copula function is used to analyze the correlation between logging data and prediction parameters, the influence of the nonlinear correlation between parameters can be weakened to a certain extent. Based on the Copula function and its derived correlation indices, the nonlinear and asymmetric correlation between a logging curve and a predicted physical parameter can be accurately measured. Therefore, the Kendall rank correlation coefficient τ and Spearman rank correlation coefficient ρ based on the Copula function are used to quantitatively analyze the correlation between the logging curves and the porosity parameter. Among them, the Kendall rank correlation coefficient τ measures the degree of consistent change between the logging parameters and the porosity parameter, and the Spearman rank correlation coefficient ρ measures the degree of monotonic correlation between the logging curves and the porosity parameter. The calculation results are compared with those calculated with the Pearson linear correlation coefficient.

Copula function theory accurately describes the correlation between nonlinear and asymmetric variables. The details are as follows: suppose that the marginal probability distribution functions of an n-dimensional random variable (x_1, x_2, \ldots, x_n) with joint distribution function H are F(x_1), F(x_2), \ldots, F(x_n). Then there is a Copula function C which satisfies the following condition:

$$H(x_1, x_2, \ldots, x_n) = C[F_1(x_1), F_2(x_2), \ldots, F_n(x_n)] \qquad (1)$$

where the density of the N-dimensional t-Copula is defined as follows [31, 32]:

$$c(\mu; \rho, \upsilon) = |\rho|^{-1/2}\,\frac{\Gamma((\upsilon + N)/2)\,[\Gamma(\upsilon/2)]^{N-1}}{[\Gamma((\upsilon + 1)/2)]^{N}}\cdot\frac{\bigl(1 + \tfrac{1}{\upsilon}\,\xi^{\mathrm{T}}\rho^{-1}\xi\bigr)^{-(\upsilon + N)/2}}{\prod_{n=1}^{N}\bigl(1 + \xi_n^{2}/\upsilon\bigr)^{-(\upsilon + 1)/2}} \qquad (2)$$

where $\rho$ is an N-order symmetric positive definite matrix with all diagonal elements equal to 1, $|\rho|$ is the determinant of $\rho$, $\Gamma(\cdot)$ is the gamma function, $\upsilon$ is the degrees of freedom of the t-Copula function with N variables, $\xi = [t_\upsilon^{-1}(\mu_1), t_\upsilon^{-1}(\mu_2), \ldots, t_\upsilon^{-1}(\mu_N)]$ where $t_\upsilon^{-1}(\cdot)$ is the inverse of the univariate t distribution function with $\upsilon$ degrees of freedom, and $\mu_i\ (i = 1, 2, \ldots, N)$ are the input variables.

The Kendall rank correlation coefficient $\tau$ measures the degree of concordant change between $x$ and $y$. Suppose $(x_1, y_1)$ and $(x_2, y_2)$ are independent and identically distributed vectors with $x_1, x_2 \in x$ and $y_1, y_2 \in y$. Then

$$\tau = P[(x_1 - x_2)(y_1 - y_2) > 0] - P[(x_1 - x_2)(y_1 - y_2) < 0] \qquad (3)$$

where $P$ is the probability distribution function. Rearranging the above formula gives

$$\tau = 2P[(x_1 - x_2)(y_1 - y_2) > 0] - 1, \quad \tau \in [-1, 1] \qquad (4)$$

Suppose that the Copula function corresponding to $(x_1, y_1)$ is $C_1(\mu, \upsilon)$. Then the Kendall rank correlation coefficient $\tau$ can be obtained from the corresponding Copula function as

$$\tau = 4\int_0^1\!\int_0^1 C_1(\mu, \upsilon)\, dC_1(\mu, \upsilon) - 1 \qquad (5)$$

For the Spearman rank correlation coefficient $\rho$, suppose that the joint distribution function of $(x, y)$ is $H(x, y)$ and the marginal distribution functions of $x$ and $y$ are $F(x)$ and $G(y)$, respectively. If $H(x, y) = F(x)G(y)$, then the random variables $x$ and $y$ are independent of each other. If $x_0 \in x$ and $y_0 \in y$, then $x_0$ and $y_0$ are independent of each other. If $(x, y)$ and $(x_0, y_0)$ are independent of each other, then

$$\rho = 3\{P[(x - x_0)(y - y_0) > 0] - P[(x - x_0)(y - y_0) < 0]\} \qquad (6)$$

Suppose that the Copula function of $(x, y)$ is $C(\mu, \upsilon)$, where $\mu = F(x)$ and $\upsilon = G(y)$. The Spearman rank correlation coefficient $\rho$ can also be obtained from the corresponding Copula function as

$$\rho = 12\int_0^1\!\int_0^1 \mu\upsilon\, dC(\mu, \upsilon) - 3 \qquad (7)$$
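To illustrate why the rank-based coefficients capture monotonic but nonlinear dependence that the Pearson coefficient understates, the following sketch (a hypothetical example using SciPy, not the authors' code) compares the three measures on an exactly monotonic, strongly nonlinear relationship:

```python
import numpy as np
from scipy import stats

# Synthetic "logging curve" x and a monotonic but strongly
# nonlinear "porosity" response y (hypothetical data).
x = np.linspace(0.1, 5.0, 50)
y = np.exp(x)  # strictly increasing, far from linear

pearson_r, _ = stats.pearsonr(x, y)      # linear correlation only
kendall_tau, _ = stats.kendalltau(x, y)  # concordance, cf. equation (3)
spearman_rho, _ = stats.spearmanr(x, y)  # monotonic correlation

# The rank coefficients detect the perfect monotonic dependence,
# while the Pearson coefficient understates it.
print(kendall_tau, spearman_rho, round(pearson_r, 3))
```

Because τ and ρ depend only on ranks, any strictly increasing transform of the data leaves them unchanged, which is what makes them suitable for the nonlinear log-porosity relationships discussed above.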

2.2. Recurrent Neural Network (RNN). A recurrent neural network (RNN) is a kind of neural network used to process sequence data [33]. The RNN shares weights across different time steps and makes connections between them. The RNN structure with only one hidden layer is shown in Figure 1. Compared with the multilayer perceptron, the RNN hidden layer is connected not only to the output layer but also to the hidden-layer nodes themselves; that is, the output of the hidden layer is transmitted both to the output layer and back to the hidden layer. This not only reduces the number of parameters but also establishes a nonlinear relationship between the sequence data at different times. Therefore, the RNN has unique advantages in dealing with nonlinear and time-series problems.

2.3. Long Short-Term Memory (LSTM) Network. The long short-term memory (LSTM) network is an important improvement of the RNN. It effectively solves the RNN problems of gradient vanishing and gradient explosion and gives the network stronger memory ability. In addition, the LSTM network can remember longer histories of data, since it not only has an


external RNN cycle structure but also an internal "LSTM cell" circulation (self-circulation). Therefore, the LSTM does not simply impose an element-by-element nonlinearity on the affine transformation of input and recurrent units. It is similar to the common recurrent network in that each unit has the same input and output structure, but each unit also has a gate-control system with more parameters that governs the information flow. The structure of the LSTM hidden layer is shown in Figure 2, where $C_{t-1}$ is the node state of the previous sequence of the hidden layer, $h_{t-1}$ is the output of the previous sequence of hidden-layer nodes, $x_t$ is the input of the hidden-layer node of the current sequence, $C_t$ is the current-sequence hidden-layer node state, $h_t$ is the output of the hidden-layer node of the current sequence, $\sigma$ is the sigmoid nonlinear activation function, and tanh is the hyperbolic tangent function.

Compared with the RNN, the LSTM network is better at learning long-term dependence between sequence data, but it has a complex structure, many parameters, and a slow convergence speed.

2.4. Gated Recurrent Unit (GRU) Neural Network. The gated recurrent unit (GRU) neural network, an important variant of the LSTM network, is an optimization and improvement of the LSTM. It inherits the LSTM's ability to deal with nonlinear and time-series problems. Moreover, it retains the memory-unit function of the LSTM while simplifying the structure and reducing the number of parameters, which greatly improves the training speed [34]. The structure of the GRU neural network is shown in Figure 3, where $z_t$ represents the update-gate state, $r_t$ represents the reset-gate state, and $\tilde{h}_t$ represents the pending output of the current neuron. The GRU neural network improves the "gate" design of the LSTM network: the original cell structure composed of three gates is optimized to a cell structure composed of two gates. In short, a gated recurrent unit consists of a reset gate and an update gate.

The states of the reset gate and the update gate at time $t$ are defined as $r_t$ and $z_t$, respectively:

$$r_t = \sigma(W_r x_t + U_r h_{t-1}), \quad z_t = \sigma(W_z x_t + U_z h_{t-1}) \qquad (8)$$

where $W$ and $U$ are weight matrices and $x_t$ is the input data. The hidden state $h_t$ and the candidate hidden state $\tilde{h}_t$ are calculated according to the following formula:

$$h_t = (1 - z_t)h_{t-1} + z_t \tilde{h}_t, \quad \tilde{h}_t = \tanh[W_h x_t + U_h(r_t * h_{t-1})] \qquad (9)$$

The two activation functions in equations (8) and (9) are defined as follows:

$$\sigma(x) = \frac{1}{1 + \exp(-x)}, \quad \tanh(x) = \frac{1 - \exp(-2x)}{1 + \exp(-2x)} \qquad (10)$$
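Equations (8)–(10) can be traced with a minimal NumPy sketch of one GRU cell step; the dimensions and random weights below are illustrative assumptions, not values from this study:

```python
import numpy as np

def sigmoid(x):
    # Logistic activation from equation (10)
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_r, U_r, W_z, U_z, W_h, U_h):
    """One GRU time step following equations (8) and (9)."""
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)             # reset gate
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev)             # update gate
    h_cand = np.tanh(W_h @ x_t + U_h @ (r_t * h_prev))  # candidate state
    return (1.0 - z_t) * h_prev + z_t * h_cand          # new hidden state

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8  # e.g., four logging curves feeding eight hidden units
W = {g: 0.1 * rng.standard_normal((n_hid, n_in)) for g in "rzh"}
U = {g: 0.1 * rng.standard_normal((n_hid, n_hid)) for g in "rzh"}

h = np.zeros(n_hid)
for x_t in rng.standard_normal((50, n_in)):  # a 50-step input sequence
    h = gru_step(x_t, h, W["r"], U["r"], W["z"], U["z"], W["h"], U["h"])

# h_t is a convex combination of h_{t-1} and a tanh output, so the
# hidden state stays bounded in (-1, 1).
print(h.shape, bool(np.all(np.abs(h) < 1.0)))
```

The convex-combination form of equation (9) is what keeps the state bounded and lets the update gate interpolate smoothly between keeping and replacing memory.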

2.5. Structure of the GRU Neural Network Prediction Model. In the prediction of reservoir porosity parameters, the logging curves from shallow to deep reflect the formation


Figure 2: Hidden layer structure of the LSTM network.


Figure 3: Schematic diagram of the GRU neural network.


Figure 1: Schematic diagram of the RNN structure with only one hidden layer.


characteristics of different geological periods, and their trends carry important information about the physical parameters [14]. When traditional statistical analysis or conventional machine learning methods are used to predict porosity parameters, the potential internal relations in the historical series of logging parameters are easily destroyed, which reduces the accuracy of the prediction results. Unlike other machine learning or deep learning methods, the GRU neural network has long-term memory ability [35]. By handling the long-term dependence between series data, the GRU neural network can effectively exploit such relationships, and its internal gating mechanism can automatically learn time-series features [36]. Figure 4 shows the structure of the three-layer GRU neural network model.

As can be seen from Figure 4, the structure of the GRU neural network model includes an input layer, hidden layers, and an output layer, with the hidden layers being the core of the network structure. During training, the hyperparameters of the GRU neural network model structure, including the number of hidden layers and the number of hidden-layer neurons, need to be optimized and adjusted. Theoretically, more hidden layers and more neurons improve model performance: the deeper and more complex the network, the higher the prediction accuracy. However, some studies [13, 18] have shown that too many hidden layers and neurons lead to training difficulties and overfitting, which reduce the prediction accuracy of the model. If the network is too shallow and simple, it easily underfits and fails to meet the expected requirements. Therefore, the choice of the number of hidden layers and neurons is very important to the prediction performance of the network. The learning ability of the network must be balanced against the training complexity and the required prediction accuracy, and the optimal numbers of nodes and hidden layers determined from experience and extensive experiments. In addition, optimizing training parameters such as the learning rate, batch size, and maximum number of iterations can reduce the complexity of the model to a certain extent and improve its convergence speed and prediction accuracy.

The training process of the GRU neural network can be roughly divided into three steps:

Step 1: input the training data into the network, compute the output of each GRU unit from the shallow layer to the deep layer along the forward-propagation direction, and obtain the predicted output corresponding to the input data at the current time point.

Step 2: compute the error of each neuron along the back-propagation direction. Error back-propagation in the GRU neural network includes propagation along the time sequence and propagation layer by layer between the network layers.

Step 3: compute the gradient of each weight from the back-propagated error and adjust the network weights using the learning-rate-adaptive optimization algorithm (the Adam algorithm). Finally, repeat the above steps to iteratively optimize the network.
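The Adam update used in Step 3 can be sketched in NumPy as follows; this is the standard Adam rule with default moment-decay rates applied to a toy quadratic, not the paper's training code:

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=0.005, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step: an adaptive learning rate per parameter."""
    m = b1 * m + (1 - b1) * grad       # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - b1 ** t)          # bias corrections
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy problem: minimize (w - 3)^2, whose gradient is 2(w - 3).
w, m, v = 10.0, 0.0, 0.0
for t in range(1, 5001):
    w, m, v = adam_update(w, 2.0 * (w - 3.0), m, v, t)

print(round(float(w), 2))  # close to 3.0
```

In practice the gradients come from the back-propagation of Step 2 rather than a closed-form derivative, but the per-parameter update is the same.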

2.6. Prediction Model Based on CA-GRU. The modeling process of the combined forecasting model based on CA-GRU is shown in Figure 5 and mainly includes the following six steps:

Step 1: use the obtained logging curves and porosity parameters as the database. The Kendall rank correlation coefficient τ and Spearman rank correlation coefficient ρ based on the Copula function, together with the Pearson linear correlation coefficient, are used to quantitatively calculate and analyze the degree of correlation between them, and the logging curves sensitive to the prediction parameters are selected to form new sample data.

Step 2: standardize the new sample data and divide them into a training set and a test set in a certain proportion.

Step 3: construct the GRU neural network model for porosity prediction and initialize the network parameters. The number of network layers and the number of hidden-layer neurons are determined experimentally.

Step 4: during training, continuously optimize the network structure until the training error of the model reaches the preset target, and then save the model.

Step 5: test the trained GRU neural network model with the partitioned test set and de-normalize the predicted values of the model to obtain predicted porosity values corresponding to the actual values.

Step 6: compare the predicted values with the actual values and analyze the error. The prediction


Figure 4: Structure diagram of the three-layer GRU neural network model.


performance of the model is evaluated according to the corresponding evaluation indices.

3. Data Processing and Analysis

3.1. Data Preparation. As shown in Figure 6(a), the Ordos Basin is a large superimposed basin in central China with huge oil and gas resources. It is considered one of the Chinese basins with the greatest potential for growth in oil and gas reserves and production, and it is a petroliferous basin whose main traps are stratigraphic and lithologic. The cumulative explored natural gas reserves in the Ordos Basin are about 2.7×10¹² m³, and the geological resources still to be explored are about 12.5×10¹² m³, which indicates that the Ordos Basin is still in an early stage of exploration. In addition, the Ordos Basin is rich in unconventional oil and gas resources such as coalbed methane, shale gas, and tight sandstone gas. The geological resources of coalbed methane in the basin are about 9.86×10¹² m³, of which about 1.79×10¹² m³ is recoverable. The geological resources of shale gas in the basin are about 5.3×10¹² m³, of which the recoverable reserves are about 2.91×10¹² m³. The geological resources of tight sandstone gas in the basin are about 7.84×10¹² m³, of which about 3.53×10¹² m³ is recoverable [18]. For conventional and unconventional oil and gas resources alike, the Ordos Basin clearly has great exploration potential. Therefore, a fast and accurate method is needed to obtain porosity information, which is an important parameter for oil and gas exploration.

As shown in Figure 6(b), well E1 is a shale gas well in the northeastern part of the Ordos Basin, China. Preparing the database, including the logging data, is a very important step in constructing the model. In this study, the available well logs of well E1 include spontaneous potential (SP), compensated neutron log (CNL), compressional wave slowness (DTC), resistivity (RT), uranium (U), natural gamma-ray (GR), bulk density (DEN), potassium (K), and thorium (TH). Table 1 summarizes the recorded logging parameter data for well E1.

Figure 7 presents the logging parameter plot for well E1. Because obtaining porosity information from logging parameters is central to reservoir evaluation, this study is highly relevant to oil and gas exploration.

3.2. Data Analysis. Selecting suitable logging inputs for the model is very important in preparing for accurate porosity prediction. In this study, the Kendall rank correlation coefficient τ and Spearman rank correlation coefficient ρ based on the Copula function, together with the Pearson linear correlation coefficient R, are used to quantitatively calculate the correlation between the logging parameters and the porosity parameter. The comparison of the absolute values of the three correlation coefficients is shown in Figure 8.

As can be seen from Figure 8, Pearson correlation analysis often misses the nonlinear correlation between logging parameters and porosity. In the correlation analysis between logging data and porosity, the correlation coefficients of DTC, DEN, and CNL with porosity are relatively


Figure 5: Modeling flow chart of the porosity parameter prediction model based on CA-GRU.


high, indicating that DTC, DEN, and CNL have a strong correlation with porosity. The Pearson correlation coefficient between GR and porosity is relatively low, but the correlation analysis based on the Copula function yields higher τ and ρ, which shows that although the linear correlation between GR and porosity is low, there is a strong nonlinear correlation between them. In summary, both linear and nonlinear correlation analysis methods are used in this study to optimize the conventional logging parameters. Finally, four logging parameters, DTC, DEN, CNL, and GR, are selected as independent variables for network modeling, and a porosity prediction model is established.

4. Results and Discussion

As mentioned above, a set of 5165 data points from well E1 was used to build the model. This set was divided into training and test subsets by depth: the training subset consists of the first 3874 data points, and the test subset consists of the remaining points. For comparison, RNN, GRU, CA-GRU, and multiple linear regression (MLR) models were applied to predict porosity.

First, all data are standardized to the range from zero to one with equation (11); the standardized data are used for the training and test sets of the RNN, GRU, and CA-GRU models. In addition, the convergence of the neural networks may be


Figure 6: (a) The position of the Ordos Basin in China; (b) the structural belts of the Ordos Basin and the sampling well near the Yimeng uplift [15].

Table 1: Summary of the recorded logging parameters for well E1.

          SP (mV)  GR (API)  DTC (μs/ft)  RT (Ω·m)  U (ppm)  K (%)  TH (ppm)  DEN (g/cm³)  CNL (%)
Miv       67.64    20.65     54.42        9.44      2.09     0.40   1.71      2.45         7.39
Mav       74.98    66.75     91.56        52.01     7.36     2.33   11.90     2.72         27.27
Average   72.17    44.04     74.33        23.29     4.39     1.08   5.43      2.57         16.18
SD        1.38     7.74      6.89         7.09      1.08     0.31   1.77      0.04         3.78

Miv = minimum value; Mav = maximum value; SD = standard deviation.


guaranteed by this preprocessing, which also increases the calculation speed of the network methods.

$$x^{*} = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \qquad (11)$$

where $x^{*}$ is the normalized data, $x$ is the original data, and $x_{\max}$ and $x_{\min}$ are the maximum and minimum of the original dataset, respectively.
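Equation (11), together with the de-normalization of predictions mentioned in Step 5 above, can be sketched as follows (a minimal NumPy illustration; the per-column scaling of multiple curves is an assumption about how multi-curve data would be handled):

```python
import numpy as np

def minmax_fit(data):
    """Per-column minimum and maximum for equation (11)."""
    return data.min(axis=0), data.max(axis=0)

def minmax_transform(data, lo, hi):
    return (data - lo) / (hi - lo)   # x* in [0, 1]

def minmax_inverse(scaled, lo, hi):
    return scaled * (hi - lo) + lo   # de-normalize predictions (Step 5)

# Hypothetical "logging curves": rows are depth samples, columns are
# two curves with very different ranges (e.g., DTC-like and DEN-like).
rng = np.random.default_rng(1)
logs = rng.uniform([54.0, 2.45], [92.0, 2.72], size=(100, 2))

lo, hi = minmax_fit(logs)
scaled = minmax_transform(logs, lo, hi)
restored = minmax_inverse(scaled, lo, hi)  # round trip

print(scaled.min() >= 0.0, scaled.max() <= 1.0, np.allclose(restored, logs))
```

Scaling each curve independently keeps curves with large numeric ranges (such as DTC) from dominating curves with small ranges (such as DEN) during training.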

In this study, several statistical indicators, including the correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE), are used to compare prediction performance. These indicators measure how close the predictions of a model are to the actual values. The following equations give these standard tools.

The R criterion can be described as follows:

$$R = \sqrt{1 - \frac{\sum_{i=1}^{n}(\hat{p}_i - p_i)^2}{\sum_{i=1}^{n}\hat{p}_i^{2} - (1/n)\bigl(\sum_{i=1}^{n}\hat{p}_i\bigr)^{2}}} \qquad (12)$$

The MAE criterion may be depicted as follows:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{p}_i - p_i\right| \qquad (13)$$

VAF is usually used to assess the accuracy of a model by comparing the measured values with the evaluated output of the model, and the VAF criterion can be computed as follows:

$$\mathrm{VAF} = \left[1 - \frac{\mathrm{var}(\hat{p}_i - p_i)}{\mathrm{var}(\hat{p}_i)}\right] \times 100 \qquad (14)$$

The RMSE is traditionally used to monitor the quality of the error function of the model; the performance of the model increases as the RMSE decreases. The RMSE criterion can be computed as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{p}_i - p_i)^2} \qquad (15)$$

where, for equations (12)–(15), $p$ is the measured porosity, $\hat{p}$ denotes the predicted porosity, and $n$ is the number of testing data points.
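Equations (13)–(15) can be computed directly in NumPy; the porosity values below are made up for illustration, and VAF follows the variance-ratio form of equation (14):

```python
import numpy as np

def mae(p_hat, p):
    # Equation (13): mean absolute error
    return np.mean(np.abs(p_hat - p))

def vaf(p_hat, p):
    # Equation (14): variance accounted for, in percent
    return (1.0 - np.var(p_hat - p) / np.var(p_hat)) * 100.0

def rmse(p_hat, p):
    # Equation (15): root mean square error
    return np.sqrt(np.mean((p_hat - p) ** 2))

# Made-up measured (p) vs. predicted (p_hat) porosity values (%).
p = np.array([8.0, 10.5, 12.0, 9.5, 11.0])
p_hat = np.array([8.4, 10.0, 12.3, 9.1, 11.2])

print(round(float(mae(p_hat, p)), 2))   # 0.36
print(round(float(rmse(p_hat, p)), 3))
print(round(float(vaf(p_hat, p)), 1))
```

MAE weights all errors equally, while RMSE penalizes large misses more heavily, which is why both are reported together in Table 3 below.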


Figure 7: Logging parameter plot for well E1.


Figure 8: Correlation between logging data and the porosity parameter.


In machine learning studies, robustness is a key characteristic. To check that the method of selecting training and testing data sets does not impair the robustness of the GRU, we randomly selected ten training and testing sets from the 5165 data points in well E1. The randomly selected sets are shown in Figure 9(a), where the color represents the depth of each sample. For each of these ten cases, a GRU can be modeled and its RMSE on the training set and the testing set calculated; the results are shown in Figure 9(b).

As shown in Figure 9(b), differences in the training samples lead to inconsistent GRU prediction errors; that is, the selection of training samples changes the GRU prediction error. To further analyze the influence of different training samples on the GRU forecasting error, we used statistical methods. The RMSE data of the ten cases above were subjected to single-factor (one-way) analysis of variance at the 5% significance level, and the results are shown in Table 2. P = 0.569 > 0.05, indicating no significant differences between the RMSE data of the 10 cases. Through comparative analysis, we can safely conclude that different training samples produce different GRU prediction errors, but the differences are not statistically significant. From a statistical point of view, the method of selecting training and testing samples therefore has little influence on the robustness of the GRU. Accordingly, this study divides the training and test sets by depth order, which conforms to the statistical law and is feasible and practical.
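The one-way ANOVA described above can be reproduced with SciPy; the ten (training RMSE, testing RMSE) pairs below are illustrative stand-ins, since the per-case values are shown only graphically in Figure 9:

```python
from scipy import stats

# Hypothetical (training RMSE, testing RMSE) pairs for ten random
# train/test splits -- illustrative values, not the paper's data.
cases = [
    [1.10, 1.30], [1.12, 1.28], [1.11, 1.31], [1.09, 1.29], [1.13, 1.27],
    [1.10, 1.32], [1.08, 1.30], [1.12, 1.30], [1.11, 1.29], [1.09, 1.31],
]

# One-way (single-factor) ANOVA across the ten cases.
f_stat, p_value = stats.f_oneway(*cases)

# A p value above 0.05 indicates no statistically significant
# difference between cases, i.e., the split choice is benign.
print(p_value > 0.05)
```

Here the between-case variation is small relative to the within-case variation, so the F statistic is small and the p value is large, mirroring the Table 2 conclusion.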

As is well known, determining the parameters of the neural network model is the premise of successful network construction. In this study, the adaptive learning-rate optimization algorithm (Adam) is used to optimize the network. The Adam algorithm combines the advantages of the RMSProp and AdaGrad algorithms and can design an independent adaptive learning rate for each parameter. According to the tests, the best learning rate is 0.005, the batch size is 10, and the time step is 50. According to previous studies of neural networks [18, 37], the numbers of hidden layers and hidden-layer nodes have a great influence on the prediction performance of neural networks, and the best values differ between research fields. Choosing the optimal numbers of hidden layers and hidden-layer nodes is the key to ensuring the prediction accuracy of the neural network model. Therefore, following a traversal-optimization approach, this study sets the hidden-layer range to [1, 10] and the hidden-layer node range to [1, 100]. By comparing the root mean square error (RMSE) of the model under different numbers of hidden layers and hidden-layer nodes, the values that minimize the RMSE are obtained. The optimization test results are shown in Figure 10. As the figure shows, too many or too few hidden layers and neurons lead to drastic changes in the RMSE of the prediction results and thus to decreased prediction accuracy. Through traversal optimization, the optimal numbers of hidden layers and nodes for this study are 3 and 41, respectively.
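The traversal optimization over hidden layers in [1, 10] and nodes in [1, 100] can be sketched generically; `evaluate_rmse` below is a hypothetical stand-in for training a GRU with a given configuration and returning its test RMSE, replaced here by a synthetic error surface whose minimum mimics the reported optimum:

```python
def evaluate_rmse(n_layers, n_nodes):
    """Hypothetical stand-in for: train a GRU with this configuration
    and return its test RMSE. The synthetic surface below has its
    minimum at (3 layers, 41 nodes), mimicking the reported optimum."""
    return 1.0 + 0.05 * (n_layers - 3) ** 2 + 0.0005 * (n_nodes - 41) ** 2

# Traverse the full grid and keep the configuration with the lowest RMSE.
grid = ((layers, nodes) for layers in range(1, 11) for nodes in range(1, 101))
best = min(grid, key=lambda cfg: evaluate_rmse(*cfg))
print(best)  # (3, 41)
```

An exhaustive traversal like this is feasible only because each configuration is evaluated once; with real training in the loop, the 1000-point grid is the dominant cost.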

According to the above correlation analysis results, the four logging parameters DTC, DEN, CNL, and GR, which correlate strongly with porosity, are selected as the modeling

Table 2: The results of one-way ANOVA.

                 Sum of squares   df   Mean square   F       P value
Between groups   0.077            9    0.009         0.883   0.569
Within groups    0.097            10   0.010
Total            0.174            19


Figure 9: Results of GRU with randomly selected training sets. (a) Ten cases of randomly selected sets. (b) The RMSE of training and testing for each case.


independent variables of the porosity prediction model. Figure 11 compares the actual and predicted porosity of the RNN, GRU, CA-GRU, and MLR models along the depth of well E1, together with local enlargements. It shows that the results of the MLR model differ considerably from the measured porosity, while


Figure 10: Model performance under different numbers of hidden layers and nodes.


Figure 11: A comparison between measured and predicted porosity with the RNN, GRU, CA-GRU, and MLR models, respectively, in well E1. (a) Measured versus RNN-predicted porosity. (b) Measured versus GRU-predicted porosity. (c) Measured versus CA-GRU-predicted porosity. (d) Measured versus MLR-predicted porosity.


the results of the RNN, GRU, and CA-GRU models are very consistent with the actual situation.

Finally, the statistical indicators mentioned above were employed to carry out this comparison, and the outcomes are presented in Table 3. Table 3 shows that the three porosity prediction models based on deep learning were far superior in prediction accuracy to the model established with the traditional multiple linear regression (MLR) method. Comparing the prediction precision of the three deep learning models (RNN, GRU, and CA-GRU) established in this study shows that the GRU and CA-GRU models were superior to the RNN model, with CA-GRU achieving the best precision: the highest R of 0.9423 and VAF of 88.7578%, and the lowest MAE of 0.2101 and RMSE of 1.1412. The difference in precision between CA-GRU and GRU was only slight.

To conclude, both the CA-GRU and GRU models provide successful porosity prediction performance. The CA-GRU model showed higher precision in predicting porosity, and from Table 3 we can conclude that the CA-GRU model built in this study predicts porosity efficiently and with high precision. The verification results in well E1 show that the CA-GRU model with optimal inputs can be regarded as an efficient tool for predicting porosity, particularly in areas where high-precision porosity data are required.

5. Conclusions

Logging parameters reflect the acoustic, electrical, and other responses of sediments deposited in different geological periods. Porosity is a characteristic response of different formations and has strong time-series characteristics. Directly predicting porosity from logging parameters can effectively avoid the high cost of special methods such as rock physical analysis and nuclear magnetic resonance, providing an accurate and low-cost decision-making basis for petroleum exploration and development in oil and gas fields. Deep learning technology can discover the nonlinear relationships between different parameters entirely from the data, which makes it very suitable for solving nonlinear geophysical interpretation problems. It can make full use of the response characteristics of the various logging parameters to different formations and, at the same time, escape the limitations of the linear predictions of traditional empirical formulas.

Considering the time-series characteristics of logging parameters and porosity parameters, a reservoir porosity prediction method based on the GRU neural network is proposed in this study. The correlation analysis method based on the Copula function is used to select sensitive logging parameters, and the GRU neural network is then used to build the prediction model. This approach considers not only the influence of strongly correlated sample data on the prediction of porosity parameters but also the nonlinear mapping relationship between porosity parameters and logging curves, as well as the trend and correlation of logging information with depth. The correlation measure based on the Copula function can select the logging curves sensitive to porosity, reduce the input dimension of the model, eliminate redundancy between variables, and improve the overall prediction performance of the model. The results show that the GRU neural network model has strong feature extraction ability and can effectively extract deep characteristics reflecting porosity from logging data. Compared with methods such as multiple linear regression analysis, it predicts porosity more accurately and has strong robustness and anti-interference ability. This study provides a new idea for the accurate interpretation of logging data in oil and gas field exploration and development.

Abbreviations

DNN: Deep neural network
GRU: Gated recurrent unit
NMR: Nuclear magnetic resonance
CNN: Convolutional neural network
RNN: Recurrent neural network
SAE: Stack autoencoder
U: Uranium
DEN: Bulk density
TH: Thorium
MAE: Mean absolute error
RMSE: Root mean square error
LSTM: Long short-term memory
CA: Correlation analysis
MLR: Multiple linear regression
SP: Spontaneous potential
CNL: Compensated neutron log
DTC: Compressional wave slowness
RT: Resistivity
GR: Natural gamma-ray
K: Potassium
R: Correlation coefficient
VAF: Variance accounted for
CA-GRU: Correlation analysis-gated recurrent unit

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Table 3: Comparison of the CA-GRU model with the RNN, GRU, and MLR models using four performance indicators: correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE).

Method    R        MAE      VAF (%)    RMSE
RNN       0.9034   0.2728   81.5798    1.4952
GRU       0.928    0.2303   86.1089    1.2643
CA-GRU    0.9423   0.2101   88.7578    1.1412
MLR       0.8489   0.9035   71.7449    2.2589


Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are grateful for the financial support provided by the National Natural Science Foundation of China (41504041). The authors would also like to thank Xiaoyan Deng for assisting in the preparation of this manuscript, and are particularly grateful to her for checking the grammar, style, and syntax.

References

[1] S. Singh, A. I. Kanli, and S. Sevgen, "A general approach for porosity estimation using artificial neural network method: a case study from Kansas gas field," Studia Geophysica et Geodaetica, vol. 60, no. 1, pp. 130–140, 2016.

[2] Z. Zhong and T. R. Carr, "Application of a new hybrid particle swarm optimization-mixed kernels function-based support vector machine model for reservoir porosity prediction: a case study in Jacksonburg-Stringtown oil field, West Virginia, USA," Interpretation, vol. 7, no. 1, pp. T97–T112, 2019.

[3] A. Bakhorji, H. Mustafa, and S. Aramco, Rock Physics Modeling and Analysis of Elastic Signatures for Intermediate to Low Porosity Sandstones, pp. 1–5, Society of Exploration Geophysicists, Houston, TX, USA, 2012.

[4] S. Tao, Z. Pan, S. Chen, and S. Tang, "Coal seam porosity and fracture heterogeneity of marcolithotypes in the Fanzhuang Block, southern Qinshui Basin, China," Journal of Natural Gas Science and Engineering, vol. 66, pp. 148–158, 2019.

[5] S. Tao, X. Zhao, D. Tang, C. Deng, Q. Meng, and Y. Cui, "A model for characterizing the continuous distribution of gas storing space in low-rank coals," Fuel, vol. 233, pp. 552–557, 2018.

[6] S. Tao, S. Chen, D. Tang, X. Zhao, H. Xu, and S. Li, "Material composition, pore structure and adsorption capacity of low-rank coals around the first coalification jump: a case of eastern Junggar Basin, China," Fuel, vol. 211, pp. 804–815, 2018.

[7] C. Zhang, X. Wang, and L. Q. Zhu, "Estimation of total porosity in shale formations from element capture logging and conventional logging data," Arabian Journal of Geosciences, vol. 11, no. 11, p. 264, 2018.

[8] A. Khaksar and C. M. Griffiths, "Porosity from sonic log in gas-bearing shaly sandstones: field data versus empirical equations," Exploration Geophysics, vol. 29, no. 3-4, pp. 440–446, 1998.

[9] C. Guo, C. Zhang, and L. Zhu, "Predicting the total porosity of shale gas reservoirs," Petroleum Science and Technology, vol. 35, no. 10, pp. 1022–1031, 2017.

[10] H. L. Crow, R. J. Enkin, J. B. Percival, and H. A. J. Russell, "Downhole nuclear magnetic resonance logging in glaciomarine sediments near Ottawa, Ontario, Canada," Near Surface Geophysics, vol. 18, no. 6, p. 591, 2020.

[11] J. O. Parra, C. Hackert, M. Bennett, and H. A. Collier, "Permeability and porosity images based on NMR, sonic, and seismic reflectivity: application to a carbonate aquifer," The Leading Edge, vol. 22, no. 11, pp. 1102–1108, 2003.

[12] E. P. Leite and A. C. Vidal, "3D porosity prediction from seismic inversion and neural networks," Computers & Geosciences, vol. 37, no. 8, pp. 1174–1180, 2011.

[13] X. Shi, G. Liu, Y. Cheng et al., "Brittleness index prediction in shale gas reservoirs based on efficient network models," Journal of Natural Gas Science and Engineering, vol. 35, pp. 673–685, 2016.

[14] P. Wang, S. Peng, and T. He, "A novel approach to total organic carbon content prediction in shale gas reservoirs with well logs data, Tonghua Basin, China," Journal of Natural Gas Science and Engineering, vol. 55, pp. 1–15, 2018.

[15] P. Wang and S. Peng, "A new scheme to improve the performance of artificial intelligence techniques for estimating total organic carbon from well logs," Energies, vol. 11, no. 4, p. 747, 2018.

[16] F. S. Feng, P. Wang, Z. Wei et al., "A new method for predicting the permeability of sandstone in deep reservoirs," Geofluids, vol. 2020, Article ID 8844464, 16 pages, 2020.

[17] A. A. Talkhestani, "Prediction of effective porosity from seismic attributes using locally linear model tree algorithm," Geophysical Prospecting, vol. 63, no. 3, pp. 680–693, 2015.

[18] P. Wang and S. Peng, "On a new method of estimating shear wave velocity from conventional well logs," Journal of Petroleum Science and Engineering, vol. 180, pp. 105–123, 2019.

[19] F. S. T. Haklidir and M. Haklidir, "Prediction of reservoir temperatures using hydrogeochemical data, western Anatolia geothermal systems (Turkey): a machine learning approach," Natural Resources Research, vol. 29, no. 4, pp. 2333–2346, 2020.

[20] A. A. Mahmoud, S. Elkatatny, and D. Al Shehri, "Application of machine learning in evaluation of the static Young's modulus for sandstone formations," Sustainability, vol. 12, no. 5, p. 1880, 2020.

[21] M. He, H. Gu, and H. Wan, "Log interpretation for lithology and fluid identification using deep neural network combined with MAHAKIL in a tight sandstone reservoir," Journal of Petroleum Science and Engineering, vol. 194, p. 107498, 2020.

[22] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.

[23] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.

[24] L. C. Mao and J. Zhao, "A survey on the new generation of deep learning in image processing," IEEE Access, vol. 7, pp. 172231–172263, 2019.

[25] R. Udendhran, M. Balamurugan, A. Suresh, and R. Varatharajan, "Enhancing image processing architecture using deep learning for embedded vision systems," Microprocessors and Microsystems, vol. 76, p. 103094, 2020.

[26] R. J. G. Van Sloun, R. Cohen, and Y. C. Eldar, "Deep learning in ultrasound imaging," Proceedings of the IEEE, vol. 108, no. 1, pp. 11–29, 2020.

[27] J. Liu, C. Wu, and J. Wang, "Gated recurrent units based neural network for time heterogeneous feedback recommendation," Information Sciences, vol. 423, pp. 50–65, 2018.

[28] Y. Hao, Y. Sheng, and J. Wang, "Variant gated recurrent units with encoders to preprocess packets for payload-aware intrusion detection," IEEE Access, vol. 7, pp. 49985–49998, 2019.

[29] F. T. Dezaki, Z. B. Liao, C. Luong et al., "Cardiac phase detection in echocardiograms with densely gated recurrent neural networks and global extrema loss," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1821–1832, 2019.

[30] Z. Wang, Y. Dong, W. Liu, and Z. Ma, "A novel fault diagnosis approach for chillers based on 1-D convolutional neural network and gated recurrent unit," Sensors, vol. 20, no. 9, p. 2458, 2020.

[31] H. Colonius, "An invitation to coupling and copulas, with applications to multisensory modeling," Journal of Mathematical Psychology, vol. 74, pp. 2–10, 2016.

[32] N. Uyttendaele, "On the estimation of nested Archimedean copulas: a theoretical and an experimental comparison," Computational Statistics, vol. 33, no. 2, pp. 1047–1070, 2018.

[33] P. Vincent-Lamarre, M. Calderini, and J.-P. Thivierge, "Learning long temporal sequences in spiking networks by multiplexing neural oscillations," Frontiers in Computational Neuroscience, vol. 14, p. 78, 2020.

[34] W. Huang, Y. Li, and Y. Huang, "Deep hybrid neural network and improved differential neuroevolution for chaotic time series prediction," IEEE Access, vol. 8, pp. 159552–159565, 2020.

[35] Y. Wu, M. Yuan, S. Dong, L. Lin, and Y. Liu, "Remaining useful life estimation of engineered systems using vanilla LSTM neural networks," Neurocomputing, vol. 275, pp. 167–179, 2018.

[36] H. Yan, Y. Qin, S. Xiang, Y. Wang, and H. Chen, "Long-term gear life prediction based on ordered neurons LSTM neural networks," Measurement, vol. 165, p. 108205, 2020.

[37] X. Shi, J. Wang, G. Liu, L. Yang, X. Ge, and S. Jiang, "Application of extreme learning machine and neural networks in total organic carbon content prediction in organic shale with wire line logs," Journal of Natural Gas Science and Engineering, vol. 33, pp. 687–702, 2016.



selected as the input of modeling. The Pearson linear correlation coefficient is often used to evaluate this correlation [13]. However, Pearson correlation analysis focuses only on linear correlation and often ignores the nonlinear relationship between porosity parameters and logging curves. Therefore, when the relationship between logging data and prediction parameters is nonlinear, it is not reliable to measure the correlation with the linear correlation coefficient. If the Copula function is used to analyze the correlation between logging data and prediction parameters, the influence of the nonlinear correlation between parameters can be weakened to a certain extent. Based on the Copula function and its derived correlation indices, the nonlinear and asymmetric correlation between logging curves and predicted physical parameters can be accurately measured. Therefore, the Kendall rank correlation coefficient τ and the Spearman rank correlation coefficient ρ based on the Copula function are used to quantitatively analyze the correlation between logging curves and porosity parameters. Among them, the Kendall rank correlation coefficient τ measures the degree of concordant change between logging parameters and porosity parameters, and the Spearman rank correlation coefficient ρ measures the degree of monotonic correlation between logging curves and porosity parameters. The calculation results are compared with those obtained with the Pearson linear correlation coefficient.

Copula function theory accurately describes the correlation between nonlinear and asymmetric variables. The details are as follows: suppose that the marginal probability distribution functions of an n-dimensional random variable with joint distribution function H are F(x1), F(x2), ..., F(xn), where (x1, x2, ..., xn) is an n-dimensional random variable. Then there is a Copula function C that satisfies the following condition:

$H(x_1, x_2, \ldots, x_n) = C[F_1(x_1), F_2(x_2), \ldots, F_n(x_n)]$ (1)

where the N-dimensional t-Copula density is defined as follows [31, 32]:

$c(\mu; \rho, \upsilon) = |\rho|^{-1/2}\, \dfrac{\Gamma\left(\frac{\upsilon+N}{2}\right)\left[\Gamma\left(\frac{\upsilon}{2}\right)\right]^{N-1}}{\left[\Gamma\left(\frac{\upsilon+1}{2}\right)\right]^{N}}\, \dfrac{\left(1 + \frac{1}{\upsilon}\,\xi^{T}\rho^{-1}\xi\right)^{-(\upsilon+N)/2}}{\prod_{n=1}^{N}\left(1 + \xi_n^{2}/\upsilon\right)^{-(\upsilon+1)/2}}$ (2)

where ρ is the N-order symmetric positive definite matrix with all diagonal elements equal to one, |ρ| is the determinant of the matrix ρ, Γ(·) is the gamma function, υ is the degree of freedom of the t-Copula function with N variables, ξ = [t_υ^{-1}(μ_1), t_υ^{-1}(μ_2), ..., t_υ^{-1}(μ_N)], where t_υ^{-1}(·) is the inverse function of the univariate t distribution with υ degrees of freedom, and μ_i (i = 1, 2, ..., N) are the input independent variables.

The Kendall rank correlation coefficient τ is used to measure the degree of concordant change between x and y. Suppose that (x1, y1) and (x2, y2) are independent and identically distributed vectors, x1, x2 ∈ x and y1, y2 ∈ y. Then

$\tau = P[(x_1 - x_2)(y_1 - y_2) > 0] - P[(x_1 - x_2)(y_1 - y_2) < 0]$ (3)

where P denotes probability. After rearranging the above formula, we get

$\tau = 2P[(x_1 - x_2)(y_1 - y_2) > 0] - 1, \quad \tau \in [-1, 1]$ (4)

Suppose that the Copula function corresponding to (x1, y1) is C1(μ, υ). Then the Kendall rank correlation coefficient τ can be obtained from the corresponding Copula function as follows:

$\tau = 4\int_0^1 \int_0^1 C_1(\mu, \upsilon)\, dC_1(\mu, \upsilon) - 1$ (5)

For the Spearman rank correlation coefficient ρ, suppose that the joint distribution function of (x, y) is H(x, y) and that the marginal distribution functions of x and y are F(x) and G(y), respectively. If H(x, y) = F(x)G(y), then the random variables x and y are independent of each other. If x0 ∈ x and y0 ∈ y, then x0 and y0 are independent of each other. If (x, y) and (x0, y0) are independent of each other, then

$\rho = 3\{P[(x - x_0)(y - y_0) > 0] - P[(x - x_0)(y - y_0) < 0]\}$ (6)

Suppose that the Copula function of (x, y) is C(μ, υ), where μ = F(x) and υ = G(y). The Spearman rank correlation coefficient ρ can also be obtained from the corresponding Copula function as follows:

$\rho = 12\int_0^1 \int_0^1 \mu\upsilon\, dC(\mu, \upsilon) - 3$ (7)
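In practice, τ and ρ can also be estimated directly from sample ranks. A minimal sketch (with synthetic data, not the paper's well logs) of why the rank coefficients are preferred over Pearson's r for monotonic but nonlinear relationships:

```python
# Empirical illustration of Section 2.1's point: Kendall's tau and
# Spearman's rho are invariant under strictly monotonic transforms, while
# Pearson's r only measures linear association. The data are synthetic.
import numpy as np
from scipy.stats import kendalltau, spearmanr, pearsonr

rng = np.random.default_rng(0)
log_curve = rng.uniform(0.1, 3.0, 500)   # stand-in for a log curve, e.g. DTC
porosity = np.exp(2 * log_curve)         # strictly monotonic, highly nonlinear

tau, _ = kendalltau(log_curve, porosity)
rho, _ = spearmanr(log_curve, porosity)
r, _ = pearsonr(log_curve, porosity)

# tau and rho equal 1 (perfect monotonic association); Pearson's r is
# noticeably below 1 because the relationship is not linear.
print(f"tau={tau:.3f}, rho={rho:.3f}, pearson={r:.3f}")
```

A curve such as GR in Figure 8 behaves analogously: a low Pearson coefficient can coexist with high τ and ρ when the dependence is nonlinear.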

2.2. Recurrent Neural Network (RNN). A recurrent neural network (RNN) is a kind of neural network for processing sequence data [33]. Across different time steps, the RNN cyclically shares weights and makes connections between steps. The RNN structure with only one hidden layer is shown in Figure 1. Compared with a multilayer perceptron, the RNN hidden layer is connected not only to the output layer but also to the hidden-layer nodes themselves; that is, the output of the hidden layer is transmitted not only to the output layer but also back to the hidden layer. This allows the RNN both to reduce the number of parameters and to establish a nonlinear relationship between the sequence data at different times. Therefore, the RNN has unique advantages in dealing with nonlinear and time series problems.

2.3. Long Short-Term Memory (LSTM) Network. The long short-term memory (LSTM) network is an important improvement of the RNN. It can effectively solve the RNN problems of vanishing and exploding gradients and gives the network stronger memory ability. In addition, the LSTM network can remember longer histories of data information, as it not only has an


external RNN cycle structure but also an internal "LSTM cell" circulation (self-circulation). Therefore, the LSTM does not simply impose an element-by-element nonlinearity on the affine transformation of the input and recurrent units. Similar to a common recurrent network, each unit not only has the same input and output structure but also has a gated unit system with more parameters to control the information flow. The structure of the LSTM hidden layer is shown in Figure 2, where C_{t-1} is the node state of the previous hidden-layer step, h_{t-1} is the output of the previous hidden-layer node, x_t is the input of the current hidden-layer node, C_t is the current hidden-layer node state, h_t is the output of the current hidden-layer node, σ is the sigmoid nonlinear activation function, and tanh is the hyperbolic tangent function.

Compared with the RNN, the LSTM network is better at learning the long-term dependence between sequence data, but the LSTM network has a complex structure, many parameters, and a slow convergence speed.

2.4. Gated Recurrent Unit (GRU) Neural Network. The gated recurrent unit (GRU) neural network, an important variant of the LSTM network, is an optimization and improvement of the LSTM network. It inherits the LSTM network's ability to deal with nonlinear and time series problems. Moreover, it not only retains the memory unit function of the LSTM network but also simplifies the structure and reduces the number of parameters, thus greatly improving the training speed [34]. The structure of the GRU neural network is shown in Figure 3, where Z_t represents the update gate state, R_t represents the reset gate state, and H_t represents the pending output of the current neuron. The GRU neural network improves the design of the "gate" on the basis of the LSTM network; that is, the original cell structure composed of three gates is optimized into a cell structure composed of two gates. In short, the gated recurrent unit consists of a reset gate and an update gate.

The states of the reset gate and the update gate at time t are defined as r_t and Z_t, respectively:

$r_t = \sigma(W_r x_t + U_r h_{t-1}), \quad Z_t = \sigma(W_z x_t + U_z h_{t-1})$ (8)

where W and U are weight matrices and x_t is the input data. The hidden state h_t and the candidate hidden state h̃_t can be calculated according to the following formula:

$h_t = (1 - Z_t) h_{t-1} + Z_t \tilde{h}_t, \quad \tilde{h}_t = \tanh\left[W_h x_t + U_h (r_t * h_{t-1})\right]$ (9)

The two activation functions in equations (8) and (9) are defined as follows:

$\sigma(x) = \dfrac{1}{1 + \exp(-x)}, \quad \tanh(x) = \dfrac{1 - \exp(-2x)}{1 + \exp(-2x)}$ (10)
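Equations (8)–(10) can be sketched as a single forward time step in NumPy. This is a minimal illustration of the gate arithmetic, not the network used in the paper: biases are omitted (matching the formulas above), and the weight shapes and random initialization are purely illustrative.

```python
# One GRU forward step implementing equations (8)-(10).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, W, U):
    """One GRU time step; W, U hold the reset/update/candidate weight matrices."""
    r_t = sigmoid(W["r"] @ x_t + U["r"] @ h_prev)              # reset gate, eq. (8)
    z_t = sigmoid(W["z"] @ x_t + U["z"] @ h_prev)              # update gate, eq. (8)
    h_cand = np.tanh(W["h"] @ x_t + U["h"] @ (r_t * h_prev))   # candidate state, eq. (9)
    return (1.0 - z_t) * h_prev + z_t * h_cand                 # new hidden state, eq. (9)

rng = np.random.default_rng(1)
n_in, n_hid = 4, 8   # e.g. 4 input log curves (DTC, DEN, CNL, GR); 8 hidden units
W = {k: rng.normal(0, 0.1, (n_hid, n_in)) for k in ("r", "z", "h")}
U = {k: rng.normal(0, 0.1, (n_hid, n_hid)) for k in ("r", "z", "h")}

h = np.zeros(n_hid)
for x_t in rng.normal(size=(50, n_in)):  # run a 50-step input sequence
    h = gru_step(x_t, h, W, U)
print(h.shape)  # (8,)
```

Because h_t is a convex combination of h_{t-1} and a tanh output, the hidden state stays bounded in (-1, 1), which is one reason the GRU avoids the exploding activations of a plain RNN.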

2.5. Structure of the GRU Neural Network Prediction Model. In the prediction of reservoir porosity parameters, the logging curves from shallow to deep reflect the formation

Figure 2: Hidden layer structure of the LSTM network.

Figure 3: Schematic diagram of the GRU neural network.

Figure 1: Schematic diagram of the RNN structure with only one hidden layer.


characteristics of different geological periods, and their change trends carry important information about physical parameters [14]. When traditional statistical analysis and conventional machine learning methods are used to predict porosity parameters, it is easy to destroy the potential internal relations in the historical series of logging parameters and reduce the accuracy of the prediction results. Unlike other machine learning or deep learning methods, the GRU neural network has long-term memory ability [35]. By dealing with the long-term dependence between series data, the GRU neural network can effectively capture such relationships, and its internal gating mechanism can also automatically learn time-series features [36]. Figure 4 shows the structure of the three-layer GRU neural network model.

As can be seen from Figure 4, the structure of the GRU neural network model includes an input layer, hidden layers, and an output layer, in which the hidden layer is the core part of the network structure. In the training process, it is necessary to optimize and adjust the hyperparameters of the GRU neural network model structure, including the number of hidden layers and the number of hidden-layer neurons. Theoretically, the more hidden layers and neurons, the better the model performance: the deeper and more complex the network, the higher the prediction accuracy. However, some studies [13, 18] have shown that too many hidden layers and neurons lead to training difficulties and overfitting, which reduce the prediction accuracy of the model. If the network is too shallow and simple, it easily leads to underfitting and fails to meet the expected requirements. Therefore, the selection of the numbers of hidden layers and neurons is very important for the prediction performance of the network. We need to balance the learning ability of the network against the complexity of training and the requirements of prediction accuracy, and determine the optimal numbers of nodes and hidden layers according to experience and extensive experimental results. In addition, optimizing training parameters such as the learning rate, batch size, and maximum number of iterations can reduce the complexity of the model to a certain extent and improve the convergence speed and prediction accuracy of the model.

The training process of the GRU neural network can be roughly divided into the following three steps:

Step 1: input the training data into the network, calculate the output of each GRU neural network unit from the shallow layer to the deep layer along the forward propagation direction, and obtain the predicted output value corresponding to the input data at the current time point.

Step 2: calculate the error of each neuron along the back-propagation direction. Error back-propagation in the GRU neural network includes propagation along the time sequence and propagation layer by layer between the network layers.

Step 3: calculate the gradient of each weight according to the error obtained by back-propagation, and adjust the network weights using the learning-rate-adaptive optimization algorithm (Adam). Finally, repeat the above steps to iteratively optimize the network.

2.6. Prediction Model Based on CA-GRU. The modeling process of the combined forecasting model based on CA-GRU is shown in Figure 5, which mainly includes the following six steps:

Step 1: the obtained logging curves and porosity parameters are used as the database. The Kendall rank correlation coefficient τ and Spearman rank correlation coefficient ρ based on the Copula function, together with the Pearson linear correlation coefficient P, are used to quantitatively calculate and analyze the degree of correlation between them, and the logging curves most sensitive to the prediction parameters are selected to form new sample data.

Step 2: the new sample data are standardized and divided into a training set and a test set in a certain proportion.

Step 3: the GRU neural network model is constructed for porosity prediction and the network parameters are initialized. The number of network layers and the number of hidden-layer neurons are determined experimentally.

Step 4: during training, the network structure is continuously optimized until the training error of the model reaches the preset target, and then the model is saved.

Step 5: the trained GRU neural network model is tested with the held-out test set, and the model's predicted values are denormalized to obtain predicted porosity values on the scale of the actual values.

Step 6: the predicted values are compared with the actual values and the error is analyzed. The prediction

Figure 4: Structure diagram of the three-layer GRU neural network model.


performance of the model is evaluated according to the corresponding evaluation indices.

3. Data Processing and Analysis

3.1. Data Preparation. As shown in Figure 6(a), the Ordos Basin is a large superimposed basin in the central part of China with huge oil and gas resources. It is considered one of the basins with the greatest potential for growth of oil and gas reserves and production in China, and it is a petroliferous basin with stratigraphic and lithologic traps as its main structural traps. The accumulated explored natural gas reserves in the Ordos Basin are about 2.7 × 10^12 m³, and the geological resources still to be explored are about 12.5 × 10^12 m³, which indicates that the Ordos Basin is still in the early stage of exploration. In addition, the Ordos Basin is also rich in unconventional oil and gas resources such as coalbed methane, shale gas, and tight sandstone gas. The geological resources of coalbed methane in the basin are about 9.86 × 10^12 m³, of which the recoverable amount is about 1.79 × 10^12 m³. The geological resources of shale gas in the basin are about 5.3 × 10^12 m³, of which the recoverable reserves are about 2.91 × 10^12 m³. The geological resources of tight sandstone gas in the basin are about 7.84 × 10^12 m³, of which the recoverable amount is about 3.53 × 10^12 m³ [18]. For both conventional and unconventional oil and gas resources, it can be seen that the Ordos Basin has great exploration potential. Therefore, a fast and accurate method is needed to obtain porosity information, which is an important parameter for oil and gas exploration.

As shown in Figure 6(b), well E1 is a shale gas well in the northeastern part of the Ordos Basin, China. Preparing the database, including the logging data, is a very important step in the construction of the model. In this study, the available well logs of well E1 include spontaneous potential (SP), compensated neutron log (CNL), compressional wave slowness (DTC), resistivity (RT), uranium (U), natural gamma ray (GR), bulk density (DEN), potassium (K), and thorium (TH). Table 1 summarizes the recorded logging parameter data for well E1.

Figure 7 presents the logging parameter plot for well E1. This study is of direct value to oil and gas exploration, since obtaining porosity information from logging parameters is very important in reservoir evaluation.

3.2. Data Analysis. It is very important for the model to select suitable logging inputs when preparing for accurate porosity prediction. In this study, the Kendall rank correlation coefficient τ and Spearman rank correlation coefficient ρ based on the Copula function, together with the Pearson linear correlation coefficient R, are used to quantitatively calculate the correlation between logging parameters and porosity parameters. A comparison of the absolute values of the three correlation coefficients is shown in Figure 8.

As can be seen from Figure 8, Pearson correlation analysis often ignores the nonlinear correlation between logging parameters and porosity. In the correlation analysis between logging data and porosity, the correlation coefficients of DTC, DEN, and CNL with porosity are relatively

Figure 5: Modeling flow chart of the porosity parameter prediction model based on CA-GRU.


high, indicating that DTC, DEN, and CNL have a strong correlation with porosity. Although the Pearson correlation coefficient between GR and porosity is relatively low, the correlation analysis method based on the Copula function obtains higher τ and ρ, which shows that the linear correlation between GR and porosity is low but the nonlinear correlation is high; that is, there is a strong nonlinear correlation between them. To sum up, both linear and nonlinear correlation analysis methods are used to select conventional logging parameters in this study. Finally, four logging parameters, DTC, DEN, CNL, and GR, are selected as independent variables of network modeling, and a porosity prediction model is established.

4. Results and Discussion

As mentioned above, a set of 5165 data points from well E1 has been used to build the model. This set of data is divided into training and test subsets by depth. In this study, the training subset consists of the first 3874 data points, while the test subset consists of the remaining data points. For comparison, the RNN, GRU, CA-GRU, and multiple linear regression (MLR) models were selected and applied to predict porosity.
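The depth-ordered split described above can be sketched as follows; the feature and target arrays are random placeholders standing in for the well-E1 data, which is not reproduced here.

```python
# Depth-ordered train/test split: with 5165 samples kept in depth order,
# the first 3874 form the training set and the remaining 1291 the test set.
import numpy as np

n_samples, n_train = 5165, 3874
X = np.random.rand(n_samples, 4)   # DTC, DEN, CNL, GR (placeholder values)
y = np.random.rand(n_samples)      # porosity (placeholder values)

# No shuffling: the split preserves the stratigraphic (depth) sequence,
# which the GRU exploits as a time-series-like ordering.
X_train, X_test = X[:n_train], X[n_train:]
y_train, y_test = y[:n_train], y[n_train:]
print(len(X_train), len(X_test))  # 3874 1291
```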

In the beginning, all data are standardized to the range from zero to one using equation (11), which is applied to the test and training data sets in the RNN, GRU, and CA-GRU models. In addition, the convergence of neural networks may be

Figure 6: (a) The position of the Ordos Basin in China; (b) the structural belts of the Ordos Basin and the sampling well near the Yimeng uplift [15].

Table 1: Summary of the recorded logging parameters for well E1.

Statistic  SP (mV)  GR (API)  DTC (μs/ft)  RT (Ω·m)  U (ppm)  K (%)  TH (ppm)  DEN (g/cm³)  CNL (%)
Min        67.64    20.65     54.42        9.44      2.09     0.40   1.71      2.45         7.39
Max        74.98    66.75     91.56        52.01     7.36     2.33   11.90     2.72         27.27
Average    72.17    44.04     74.33        23.29     4.39     1.08   5.43      2.57         16.18
SD         1.38     7.74      6.89         7.09      1.08     0.31   1.77      0.04         3.78

Min = minimum value; Max = maximum value; SD = standard deviation.


guaranteed by this preprocessing, which also increases the calculating speed of the network methods.

$x^{*} = \dfrac{x - x_{\min}}{x_{\max} - x_{\min}}$ (11)

where x* is the normalized data, x is the original data, and the maximum and minimum of the original dataset are represented by x_max and x_min, respectively.
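Equation (11) translated to code (the example column values are illustrative, not taken from well E1):

```python
# Column-wise min-max scaling to [0, 1], per equation (11).
import numpy as np

def min_max_scale(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

scaled = min_max_scale([54.4, 74.3, 91.6])  # e.g. a small stand-in DTC column
print(scaled)  # the minimum maps to 0.0 and the maximum to 1.0
```

Note that x_min and x_max must be taken from the training data and reused on the test data; recomputing them per subset would leak the test distribution into the scaling.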

In this study, several statistical indicators, including the correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE), are used to compare prediction performance. These performance indicators give a sense of how well the prediction model performs with respect to the actual values. The following equations define these standard tools.

The R criterion can be described as follows:

$R = \sqrt{1 - \dfrac{\sum_{i=1}^{n} (\hat{p}_i - p_i)^2}{\sum_{i=1}^{n} \hat{p}_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} \hat{p}_i\right)^2}}$ (12)

The MAE criterion may be depicted as follows:

$\text{MAE} = \dfrac{1}{n} \sum_{i=1}^{n} |\hat{p}_i - p_i|$ (13)

The VAF is usually used to assess the accuracy of a model by comparing the measured values with the evaluated output of the model, and the VAF criterion can be computed as follows:

$\text{VAF} = \left(1 - \dfrac{\operatorname{var}\left[(\hat{p})_i - (p)_i\right]}{\operatorname{var}\left[(\hat{p})_i\right]}\right) \times 100$ (14)

The RMSE is traditionally applied to monitor the quality of the error function of the model; the performance of the model increases as the RMSE decreases. The RMSE criterion can be computed as follows:

$\text{RMSE} = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} \left[(\hat{p})_i - (p)_i\right]^2}$ (15)

where, for equations (12)–(15), p is the measured porosity, p̂ denotes the estimated porosity, and n is the number of testing data points.
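Equations (12)–(15) can be checked against a small sketch; the three porosity pairs below are made-up values, not the paper's results.

```python
# The four evaluation metrics of equations (12)-(15); p_hat is the
# predicted porosity, p the measured porosity.
import numpy as np

def metrics(p_hat, p):
    p_hat, p = np.asarray(p_hat, float), np.asarray(p, float)
    n = len(p)
    resid = p_hat - p
    r = np.sqrt(1 - np.sum(resid**2)
                / (np.sum(p_hat**2) - np.sum(p_hat)**2 / n))   # eq. (12)
    mae = np.mean(np.abs(resid))                                # eq. (13)
    vaf = (1 - np.var(resid) / np.var(p_hat)) * 100             # eq. (14)
    rmse = np.sqrt(np.mean(resid**2))                           # eq. (15)
    return r, mae, vaf, rmse

r, mae, vaf, rmse = metrics([5.1, 8.0, 11.9], [5.0, 8.2, 12.0])
print(round(mae, 4), round(rmse, 4))  # 0.1333 0.1414
```

For a perfect prediction the residuals vanish, giving R = 1, MAE = 0, VAF = 100, and RMSE = 0, which matches the intended behavior of all four indicators in Table 3.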

Figure 7: Logging parameters plot for well E1.

Figure 8: Correlation between logging data and the porosity parameter.


In machine learning studies, robustness is a key characteristic. To verify that the method of selecting training and testing data sets does not affect the robustness of the GRU, we randomly selected ten training and testing sets from the 5165 data points in well E1. The randomly selected sets are shown in Figure 9(a), where the color represents the corresponding depth of the sample. For each of these ten cases, a GRU can be modeled and the RMSE of the training set and the testing set can be calculated; the results are shown in Figure 9(b).

As shown in Figure 9(b), differences in the training samples lead to inconsistent GRU prediction errors. Therefore, it can be concluded that the selection of training samples causes changes in the GRU prediction error. In order to further analyze the influence of different training samples on the GRU forecasting error, we use statistical methods. One-way analysis of variance (at the 5% significance level) was carried out on the RMSE data of the above ten cases, and the analysis results are shown in Table 2. P = 0.569 > 0.05, indicating that there are no significant differences between the RMSE data of the 10 cases. Through comparative analysis, we can safely conclude that different training samples produce differences in the GRU prediction error, but there is no statistically significant difference. That is, from a statistical point of view, the selection method of training and testing samples has little influence on the robustness of the GRU. Therefore, this study divides the training sample set and the test set in depth order, which conforms to the statistical law and is feasible and practical.
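This robustness check can be sketched with SciPy's one-way ANOVA. Table 2 has 9 degrees of freedom between groups and 10 within, i.e. ten groups of two RMSE values (training, testing) each; the RMSE values below are synthetic placeholders, not the paper's data.

```python
# One-way ANOVA across ten random train/test cases, mirroring Table 2's
# structure: 10 groups x 2 RMSE values -> df_between = 9, df_within = 10.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
cases = [rng.normal(1.2, 0.1, size=2) for _ in range(10)]  # (train, test) RMSE pairs

f_stat, p_value = f_oneway(*cases)
print(f"F = {f_stat:.3f}, P = {p_value:.3f}")
# In the paper, P = 0.569 > 0.05: the choice of split does not
# significantly change the GRU's RMSE.
```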

As we all know, determining the parameters of the neural network model is the premise of successfully constructing the network model. In this study, the adaptive learning rate optimization algorithm (Adam) is used to optimize the network. The Adam algorithm combines the advantages of the RMSProp and AdaGrad algorithms and can

design an independent adaptive learning rate for each parameter. According to the tests, the best learning rate is 0.005, the batch size is 10, and the time step is 50. According to previous studies [18, 37] on neural networks, the numbers of hidden layers and hidden-layer nodes have a great influence on the prediction performance of neural networks, and they differ between research fields. Choosing the optimal numbers of hidden layers and hidden-layer nodes is the key to ensuring the prediction accuracy of the neural network model. Therefore, based on traversal optimization, this study sets the hidden-layer range as [1, 10] and the hidden-layer node range as [1, 100]. By comparing the root mean square error (RMSE) of the model under different numbers of hidden layers and hidden-layer nodes, the values that give the minimum RMSE are obtained. The optimization test results are shown in Figure 10. It can be seen from the figure that too many or too few hidden layers and neurons lead to drastic changes in the RMSE of the prediction results, decreasing the prediction accuracy. Through traversal optimization, the optimal numbers of hidden layers and nodes for this study are 3 and 41, respectively.
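The traversal (exhaustive grid-search) optimization described above can be sketched as follows. `evaluate_rmse` is a hypothetical stand-in: a real run would train a GRU for each configuration and measure its test RMSE; here a smooth surrogate surface with its minimum placed at (3, 41), the optimum reported in the text, keeps the sketch fast and self-contained.

```python
# Grid search over hidden-layer counts [1, 10] and node counts [1, 100],
# keeping the configuration with the lowest RMSE.
import itertools

def evaluate_rmse(n_layers, n_nodes):
    # Hypothetical error surface (NOT a trained model); minimum at (3, 41).
    return 1.0 + 0.05 * (n_layers - 3) ** 2 + 0.0005 * (n_nodes - 41) ** 2

best = min(
    itertools.product(range(1, 11), range(1, 101)),  # 10 x 100 configurations
    key=lambda cfg: evaluate_rmse(*cfg),
)
print(best)  # (3, 41)
```

In practice each of the 1000 evaluations involves a full training run, so the traversal is the dominant cost of the model selection step.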

According to the above correlation analysis results, four logging parameters, DTC, DEN, CNL, and GR, which have a strong correlation with porosity, are selected as the modeling

Table 2: The results of one-way ANOVA.

Source          Sum of squares  df  Mean square  F      P value
Between groups  0.077           9   0.009        0.883  0.569
Within groups   0.097           10  0.010
Total           0.174           19

Figure 9: Results of the GRU with randomly selected training sets. (a) Ten cases of randomly selected sets. (b) The RMSE of training and testing for each case.

Mathematical Problems in Engineering 9

independent variables of the porosity prediction model. Figure 11 shows the comparison between the measured and predicted porosity of the RNN, GRU, CA-GRU, and MLR models along the depth of well E1, together with local enlargements. It shows that the results of the MLR model differ considerably from the measured porosity, while

Figure 10. Model performance (RMSE) under different numbers of hidden layers and hidden layer nodes.

Figure 11. A comparison between measured and predicted porosity with the RNN, GRU, CA-GRU, and MLR models, respectively, in well E1: (a) RNN; (b) GRU; (c) CA-GRU; (d) MLR.


the results of the RNN, GRU, and CA-GRU models are very consistent with the measured values.

Finally, the statistical indicators mentioned previously were employed to carry out this comparison, and the outcomes are presented in Table 3. Table 3 shows that the three porosity prediction models based on deep learning were far superior in prediction accuracy to the model established with the traditional multiple linear regression (MLR) method. Comparing the prediction precision of the three deep learning models (RNN, GRU, and CA-GRU) established in this study, the GRU and CA-GRU models were superior to the RNN model, among which CA-GRU achieved the best precision, with the highest R of 0.9423 and VAF of 88.7578% and the lowest MAE of 0.2101 and RMSE of 1.1412. The precision of the CA-GRU and GRU models differed only slightly.

In conclusion, both the CA-GRU and GRU models provide successful porosity prediction, with the CA-GRU model achieving the higher precision and efficiency. The verification results in well E1 show that the CA-GRU model with optimized inputs can be regarded as an efficient tool for predicting porosity, particularly in areas where high-precision porosity data are required.

5. Conclusions

Logging parameters reflect the acoustic, radioactive, and electrical characteristics of sediments deposited in different geological periods. Porosity is the characteristic response of different formations and has strong time-series characteristics. Directly predicting porosity from logging parameters can avoid the high cost of special methods such as rock physical analysis and nuclear magnetic resonance and thus provide an accurate, low-cost basis for decision-making in petroleum exploration and development. Deep learning technology can discover the nonlinear relationships between different parameters entirely from the data, which makes it well suited to nonlinear geophysical interpretation problems. It can make full use of the response characteristics of various logging parameters to different formations and, at the same time, overcome the limitations of linear prediction by traditional empirical formulas.

Considering the time-series characteristics of logging parameters and porosity, a reservoir porosity prediction method based on the GRU neural network is proposed in this study. It uses a Copula-function-based correlation analysis to select sensitive logging parameters and then a GRU neural network to build the prediction model. This approach considers not only the influence of strongly correlated sample data on porosity prediction but also the nonlinear mapping relationship between porosity and the logging curves, as well as the trends and correlations of logging information with depth. The Copula-based correlation measure selects the logging curves that are sensitive to porosity, reduces the dimension of the model input, eliminates redundancy between variables, and improves the overall prediction performance of the model. The research results show that the GRU neural network model has strong feature extraction ability and can effectively extract deep characteristics reflecting porosity from logging data. Compared with methods such as multiple linear regression analysis, it predicts porosity more accurately and shows strong robustness and anti-interference ability. This study provides a new idea for the accurate interpretation of logging data in oil and gas field exploration and development.

Abbreviations

DNN: Deep neural network
GRU: Gated recurrent unit
NMR: Nuclear magnetic resonance
CNN: Convolutional neural network
RNN: Recurrent neural network
SAE: Stack autoencoder
U: Uranium
DEN: Bulk density
TH: Thorium
MAE: Mean absolute error
RMSE: Root mean square error
LSTM: Long short-term memory
CA: Correlation analysis
MLR: Multiple linear regression
SP: Spontaneous potential
CNL: Compensated neutron log
DTC: Compressional wave slowness
RT: Resistivity
GR: Natural gamma-ray
K: Potassium
R: Correlation coefficient
VAF: Variance accounted for
CA-GRU: Correlation analysis-gated recurrent unit

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Table 3. Comparison of the CA-GRU model with the RNN, GRU, and MLR models using four performance indicators: correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE).

Method    R        MAE      VAF (%)   RMSE
RNN       0.9034   0.2728   81.5798   1.4952
GRU       0.928    0.2303   86.1089   1.2643
CA-GRU    0.9423   0.2101   88.7578   1.1412
MLR       0.8489   0.9035   71.7449   2.2589


Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are grateful for the financial support provided by the National Natural Science Foundation of China (41504041). The authors would also like to thank Xiaoyan Deng for assisting in the preparation of this manuscript, and are particularly grateful to her for checking the grammar, style, and syntax of the manuscript.

References

[1] S. Singh, A. I. Kanli, and S. Sevgen, "A general approach for porosity estimation using artificial neural network method: a case study from Kansas gas field," Studia Geophysica et Geodaetica, vol. 60, no. 1, pp. 130–140, 2016.

[2] Z. Zhong and T. R. Carr, "Application of a new hybrid particle swarm optimization-mixed kernels function-based support vector machine model for reservoir porosity prediction: a case study in Jacksonburg-Stringtown oil field, West Virginia, USA," Interpretation, vol. 7, no. 1, pp. T97–T112, 2019.

[3] A. Bakhorji, H. Mustafa, and S. Aramco, Rock Physics Modeling and Analysis of Elastic Signatures for Intermediate to Low Porosity Sandstones, pp. 1–5, Society of Exploration Geophysicists, Houston, TX, USA, 2012.

[4] S. Tao, Z. Pan, S. Chen, and S. Tang, "Coal seam porosity and fracture heterogeneity of marcolithotypes in the Fanzhuang Block, southern Qinshui Basin, China," Journal of Natural Gas Science and Engineering, vol. 66, pp. 148–158, 2019.

[5] S. Tao, X. Zhao, D. Tang, C. Deng, Q. Meng, and Y. Cui, "A model for characterizing the continuous distribution of gas storing space in low-rank coals," Fuel, vol. 233, pp. 552–557, 2018.

[6] S. Tao, S. Chen, D. Tang, X. Zhao, H. Xu, and S. Li, "Material composition, pore structure and adsorption capacity of low-rank coals around the first coalification jump: a case of eastern Junggar Basin, China," Fuel, vol. 211, pp. 804–815, 2018.

[7] C. Zhang, X. Wang, and L. Q. Zhu, "Estimation of total porosity in shale formations from element capture logging and conventional logging data," Arabian Journal of Geosciences, vol. 11, no. 11, p. 264, 2018.

[8] A. Khaksar and C. M. Griffiths, "Porosity from sonic log in gas-bearing shaly sandstones: field data versus empirical equations," Exploration Geophysics, vol. 29, no. 3-4, pp. 440–446, 1998.

[9] C. Guo, C. Zhang, and L. Zhu, "Predicting the total porosity of shale gas reservoirs," Petroleum Science and Technology, vol. 35, no. 10, pp. 1022–1031, 2017.

[10] H. L. Crow, R. J. Enkin, J. B. Percival, and H. A. J. Russell, "Downhole nuclear magnetic resonance logging in glaciomarine sediments near Ottawa, Ontario, Canada," Near Surface Geophysics, vol. 18, no. 6, p. 591, 2020.

[11] J. O. Parra, C. Hackert, M. Bennett, and H. A. Collier, "Permeability and porosity images based on NMR, sonic, and seismic reflectivity: application to a carbonate aquifer," The Leading Edge, vol. 22, no. 11, pp. 1102–1108, 2003.

[12] E. P. Leite and A. C. Vidal, "3D porosity prediction from seismic inversion and neural networks," Computers & Geosciences, vol. 37, no. 8, pp. 1174–1180, 2011.

[13] X. Shi, G. Liu, Y. Cheng et al., "Brittleness index prediction in shale gas reservoirs based on efficient network models," Journal of Natural Gas Science and Engineering, vol. 35, pp. 673–685, 2016.

[14] P. Wang, S. Peng, and T. He, "A novel approach to total organic carbon content prediction in shale gas reservoirs with well logs data, Tonghua Basin, China," Journal of Natural Gas Science and Engineering, vol. 55, pp. 1–15, 2018.

[15] P. Wang and S. Peng, "A new scheme to improve the performance of artificial intelligence techniques for estimating total organic carbon from well logs," Energies, vol. 11, no. 4, p. 747, 2018.

[16] F. S. Feng, P. Wang, Z. Wei et al., "A new method for predicting the permeability of sandstone in deep reservoirs," Geofluids, vol. 2020, Article ID 8844464, 16 pages, 2020.

[17] A. A. Talkhestani, "Prediction of effective porosity from seismic attributes using locally linear model tree algorithm," Geophysical Prospecting, vol. 63, no. 3, pp. 680–693, 2015.

[18] P. Wang and S. Peng, "On a new method of estimating shear wave velocity from conventional well logs," Journal of Petroleum Science and Engineering, vol. 180, pp. 105–123, 2019.

[19] F. S. T. Haklidir and M. Haklidir, "Prediction of reservoir temperatures using hydrogeochemical data, western Anatolia geothermal systems (Turkey): a machine learning approach," Natural Resources Research, vol. 29, no. 4, pp. 2333–2346, 2020.

[20] A. A. Mahmoud, S. Elkatatny, and D. Al Shehri, "Application of machine learning in evaluation of the static Young's modulus for sandstone formations," Sustainability, vol. 12, no. 5, p. 1880, 2020.

[21] M. He, H. Gu, and H. Wan, "Log interpretation for lithology and fluid identification using deep neural network combined with MAHAKIL in a tight sandstone reservoir," Journal of Petroleum Science and Engineering, vol. 194, p. 107498, 2020.

[22] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.

[23] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.

[24] L. C. Mao and J. Zhao, "A survey on the new generation of deep learning in image processing," IEEE Access, vol. 7, pp. 172231–172263, 2019.

[25] R. Udendhran, M. Balamurugan, A. Suresh, and R. Varatharajan, "Enhancing image processing architecture using deep learning for embedded vision systems," Microprocessors and Microsystems, vol. 76, p. 103094, 2020.

[26] R. J. G. Van Sloun, R. Cohen, and Y. C. Eldar, "Deep learning in ultrasound imaging," Proceedings of the IEEE, vol. 108, no. 1, pp. 11–29, 2020.

[27] J. Liu, C. Wu, and J. Wang, "Gated recurrent units based neural network for time heterogeneous feedback recommendation," Information Sciences, vol. 423, pp. 50–65, 2018.

[28] Y. Hao, Y. Sheng, and J. Wang, "Variant gated recurrent units with encoders to preprocess packets for payload-aware intrusion detection," IEEE Access, vol. 7, pp. 49985–49998, 2019.

[29] F. T. Dezaki, Z. B. Liao, C. Luong et al., "Cardiac phase detection in echocardiograms with densely gated recurrent neural networks and global extrema loss," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1821–1832, 2019.

[30] Z. Wang, Y. Dong, W. Liu, and Z. Ma, "A novel fault diagnosis approach for chillers based on 1-D convolutional neural network and gated recurrent unit," Sensors, vol. 20, no. 9, p. 2458, 2020.

[31] H. Colonius, "An invitation to coupling and copulas: with applications to multisensory modeling," Journal of Mathematical Psychology, vol. 74, pp. 2–10, 2016.

[32] N. Uyttendaele, "On the estimation of nested Archimedean copulas: a theoretical and an experimental comparison," Computational Statistics, vol. 33, no. 2, pp. 1047–1070, 2018.

[33] P. Vincent-Lamarre, M. Calderini, and J.-P. Thivierge, "Learning long temporal sequences in spiking networks by multiplexing neural oscillations," Frontiers in Computational Neuroscience, vol. 14, p. 78, 2020.

[34] W. Huang, Y. Li, and Y. Huang, "Deep hybrid neural network and improved differential neuroevolution for chaotic time series prediction," IEEE Access, vol. 8, pp. 159552–159565, 2020.

[35] Y. Wu, M. Yuan, S. Dong, L. Lin, and Y. Liu, "Remaining useful life estimation of engineered systems using vanilla LSTM neural networks," Neurocomputing, vol. 275, pp. 167–179, 2018.

[36] H. Yan, Y. Qin, S. Xiang, Y. Wang, and H. Chen, "Long-term gear life prediction based on ordered neurons LSTM neural networks," Measurement, vol. 165, p. 108205, 2020.

[37] X. Shi, J. Wang, G. Liu, L. Yang, X. Ge, and S. Jiang, "Application of extreme learning machine and neural networks in total organic carbon content prediction in organic shale with wire line logs," Journal of Natural Gas Science and Engineering, vol. 33, pp. 687–702, 2016.



external RNN recurrence, but also an internal "LSTM cell" recurrence (self-recurrence). Therefore, LSTM does not simply impose an element-by-element nonlinearity on the affine transformation of the input and recurrent units. Similar to a common recurrent network, each unit has the same input and output structure, but it also has a gated unit system with more parameters that controls the information flow. The structure of the LSTM hidden layer is shown in Figure 2, where C_{t-1} is the hidden layer node state of the previous sequence step, h_{t-1} is the output of the hidden layer node at the previous step, x_t is the input of the hidden layer node at the current step, C_t is the hidden layer node state at the current step, h_t is the output of the hidden layer node at the current step, σ is the sigmoid nonlinear activation function, and tanh is the hyperbolic tangent function.

Compared with RNN, the LSTM network is better at learning long-term dependencies between sequence data, but the LSTM network has a complex structure, many parameters, and slow convergence.

2.4. Gated Recurrent Unit (GRU) Neural Network. The gated recurrent unit (GRU) neural network, an important variant of the LSTM network, is an optimization and improvement of LSTM. It inherits the ability of the LSTM network to deal with nonlinear and time-series problems. Moreover, it retains the memory unit function of the LSTM network while simplifying the structure and reducing the number of parameters, thus greatly improving the training speed [34]. The structure of the GRU neural network is shown in Figure 3, where z_t represents the update gate state, r_t represents the reset gate state, and h̃_t represents the pending (candidate) output of the current neuron. The GRU neural network improves the "gate" design of the LSTM network: the original cell structure composed of three gates is simplified to one composed of two gates. In short, the gated recurrent unit consists of a reset gate and an update gate.

The states of the reset gate and the update gate at time t are defined as r_t and z_t, respectively:

$$r_t = \sigma\left(W_r x_t + U_r h_{t-1}\right), \qquad z_t = \sigma\left(W_z x_t + U_z h_{t-1}\right), \tag{8}$$

where W and U are weight matrices and x_t is the input data. The hidden state h_t and the candidate hidden state \tilde{h}_t can be calculated according to the following formula:

$$h_t = \left(1 - z_t\right) h_{t-1} + z_t \tilde{h}_t, \qquad \tilde{h}_t = \tanh\left[W_h x_t + U_r\left(r_t \ast h_{t-1}\right)\right]. \tag{9}$$

The two activation functions in equations (8) and (9) can be defined as follows:

$$\sigma(x) = \frac{1}{1 + \exp(-x)}, \qquad \tanh(x) = \frac{1 - \exp(-2x)}{1 + \exp(-2x)}. \tag{10}$$
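Equations (8)–(10) can be sketched directly as a single GRU time step in NumPy. This is a minimal illustration under the notation above, not production code; note that the candidate state reuses the weight matrix U_r, as the text writes it, and the weights and dimensions in the example are arbitrary.

```python
import numpy as np

def gru_step(x_t, h_prev, W_r, U_r, W_z, U_z, W_h):
    """One GRU time step following equations (8)-(10)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)              # reset gate, eq. (8)
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev)              # update gate, eq. (8)
    h_cand = np.tanh(W_h @ x_t + U_r @ (r_t * h_prev))   # candidate state, eq. (9)
    return (1.0 - z_t) * h_prev + z_t * h_cand           # new hidden state, eq. (9)

# Tiny example: 2-dimensional input, 3-dimensional hidden state.
rng = np.random.default_rng(0)
W_r, W_z = rng.standard_normal((3, 2)), rng.standard_normal((3, 2))
U_r, U_z = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
W_h = rng.standard_normal((3, 2))
h_t = gru_step(rng.standard_normal(2), np.zeros(3), W_r, U_r, W_z, U_z, W_h)
print(h_t.shape)  # (3,)
```

Because the new hidden state is a convex combination of h_{t-1} and a tanh-bounded candidate, every component of h_t stays in (-1, 1) when the initial state is zero.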

2.5. Structure of the GRU Neural Network Prediction Model. In the prediction of reservoir porosity parameters, the logging curves from shallow to deep reflect the formation

Figure 2. Hidden layer structure of the LSTM network.

Figure 3. Schematic diagram of the GRU neural network.

Figure 1. Schematic diagram of an RNN structure with only one hidden layer.

characteristics of different geological periods, and their trends contain important information about the physical parameters [14]. When traditional statistical analysis or conventional machine learning methods are used to predict porosity, the latent internal relations in the historical series of logging parameters are easily destroyed, reducing the accuracy of the prediction results. Unlike other machine learning or deep learning methods, the GRU neural network has long-term memory ability [35]. By handling the long-term dependencies between series data, the GRU neural network can effectively exploit such relationships, and its internal gating mechanism can automatically learn time-series features [36]. Figure 4 shows the structure of the three-layer GRU neural network model.

As can be seen from Figure 4, the GRU neural network model consists of an input layer, hidden layers, and an output layer, in which the hidden layers are the core of the network structure. During training, the hyperparameters of the GRU model structure, including the number of hidden layers and the number of hidden layer neurons, must be optimized. Theoretically, the more hidden layers and neurons, and the deeper and more complex the network, the better the model performance and the higher the prediction accuracy. However, some studies [13, 18] have shown that too many hidden layers and neurons lead to training difficulties and overfitting, which reduce the prediction accuracy of the model, while a network that is too shallow and simple underfits and fails to meet expectations. Therefore, the selection of the numbers of hidden layers and neurons is very important to the prediction performance of the network. We need to balance the learning ability of the network against the training complexity and the required prediction accuracy, and determine the optimal numbers of nodes and hidden layers from experience and extensive experimental results. In addition, optimizing training parameters such as the learning rate, batch size, and maximum number of iterations can reduce the complexity of the model to a certain extent and improve its convergence speed and prediction accuracy.

The training process of the GRU neural network can be roughly divided into three steps:

Step 1: input the training data into the network, compute the output of each GRU unit from the shallow layer to the deep layer along the forward propagation direction, and obtain the predicted output corresponding to the input data at the current time point.

Step 2: compute the error of each neuron along the back-propagation direction. Error back-propagation in the GRU neural network includes propagation along the time sequence and layer-by-layer propagation between the network layers.

Step 3: compute the gradient of each weight from the errors obtained by back-propagation, and adjust the network parameters using the weight gradients with the adaptive learning rate optimization algorithm (Adam). Finally, repeat the above steps to iteratively optimize the network.

2.6. Prediction Model Based on CA-GRU. The modeling process of the combined forecasting model based on CA-GRU is shown in Figure 5 and mainly includes the following six steps:

Step 1: the obtained logging curves and porosity parameters are used as the database. The Kendall rank correlation coefficient τ, Spearman rank correlation coefficient ρ, and Pearson linear correlation coefficient based on the Copula function are used to quantitatively analyze the degree of correlation between them, and the logging curves sensitive to the prediction parameter are selected to form new sample data.

Step 2: the new sample data are standardized and divided into a training set and a test set in a certain proportion.

Step 3: the GRU neural network model is constructed for porosity prediction and the network parameters are initialized. The numbers of network layers and hidden layer neurons are determined experimentally.

Step 4: during training, the network structure is continuously optimized until the training error of the model reaches the preset target, and the model is then saved.

Step 5: the trained GRU neural network model is tested with the held-out test set, and the predicted values of the model are denormalized to obtain porosity predictions on the scale of the actual values.

Step 6: the predicted values are compared with the actual values and the errors are analyzed. The prediction

Figure 4. Structure diagram of the three-layer GRU neural network model.

performance of the model is evaluated according to the corresponding evaluation indices.

3. Data Processing and Analysis

3.1. Data Preparation. As shown in Figure 6(a), the Ordos Basin is a large superimposed basin in central China with huge oil and gas resources. It is considered one of the basins with the greatest potential for growth in oil and gas reserves and production in China, and it is a petroliferous basin whose main structural traps are stratigraphic and lithologic. The accumulated explored natural gas reserves in the Ordos Basin are about 2.7 × 10^12 m^3, and the geological resources yet to be explored are about 12.5 × 10^12 m^3, indicating that the Ordos Basin is still in an early stage of exploration. In addition, the Ordos Basin is also rich in unconventional oil and gas resources such as coalbed methane, shale gas, and tight sandstone gas. The geological resources of coalbed methane in the basin are about 9.86 × 10^12 m^3, of which about 1.79 × 10^12 m^3 is recoverable. The geological resources of shale gas in the basin are about 5.3 × 10^12 m^3, of which the recoverable reserves are about 2.91 × 10^12 m^3. The geological resources of tight sandstone gas in the basin are about 7.84 × 10^12 m^3, of which about 3.53 × 10^12 m^3 is recoverable [18]. For both conventional and unconventional oil and gas resources, the Ordos Basin clearly has great exploration potential. Therefore, a fast and accurate method is needed to obtain porosity information, which is an important parameter for oil and gas exploration.

As shown in Figure 6(b), well E1 is a shale gas well in the northeastern part of the Ordos Basin, China. Preparing a database that includes the logging data is a very important step in constructing the model. In this study, the available well logs of well E1 include the spontaneous potential (SP), compensated neutron log (CNL), compressional wave slowness (DTC), resistivity (RT), uranium (U), natural gamma-ray (GR), bulk density (DEN), potassium (K), and thorium (TH). Table 1 summarizes the recorded logging parameter data for well E1.

Figure 7 presents the logging parameters plot for well E1. Obtaining porosity information from logging parameters is very important in reservoir evaluation, so this study is of real value to oil and gas exploration areas.

3.2. Data Analysis. Selecting suitable logging inputs is very important when preparing for accurate porosity prediction. In this study, the Kendall rank correlation coefficient τ, Spearman rank correlation coefficient ρ, and Pearson linear correlation coefficient R based on the Copula function are used to quantitatively calculate the correlation between the logging parameters and porosity. The comparison of the absolute values of the three correlation coefficients is shown in Figure 8.

As can be seen from Figure 8, Pearson correlation analysis often ignores the nonlinear correlation between logging parameters and porosity. In the correlation analysis between the logging data and porosity, the correlation coefficients of DTC, DEN, and CNL with porosity are relatively

Figure 5. Modeling flow chart of the porosity parameter prediction model based on CA-GRU.


high, indicating that DTC, DEN, and CNL have a strong correlation with porosity. Although the Pearson correlation coefficient between GR and porosity is relatively low, the Copula-based correlation analysis yields higher values of τ and ρ, which shows that the linear correlation between GR and porosity is low but the nonlinear correlation is high. In summary, both linear and nonlinear correlation analysis methods are used to screen the conventional logging parameters in this study. Finally, the four logging parameters DTC, DEN, CNL, and GR are selected as independent variables for network modeling, and a porosity prediction model is established.
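The point of using τ and ρ alongside Pearson's R can be illustrated with direct implementations of the two rank statistics (the paper's Copula-based estimation is more involved; this sketch computes only the sample rank correlations themselves, for tie-free data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the ranks (no ties)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return np.corrcoef(rank(x), rank(y))[0, 1]

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / all pairs."""
    n = len(x)
    s = sum(np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

# A monotonic but nonlinear relation: the rank measures detect it
# perfectly even though it is far from linear.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = x ** 3
print(spearman_rho(x, y), kendall_tau(x, y))  # 1.0 1.0
```

For the same data, Pearson's R is below 1, which mirrors how a log such as GR can score low on linear correlation yet high on τ and ρ.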

4. Results and Discussion

As mentioned above, a set of 5165 data points from well E1 has been used to build the model. This set is divided by depth into training and test subsets: the training subset consists of the first 3874 data points, and the test subset consists of the remaining points. For comparison, RNN, GRU, CA-GRU, and multiple linear regression (MLR) models were selected and applied to predict porosity.
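The depth-ordered split amounts to simple slicing; a minimal sketch (the integer stand-in list below is illustrative, not the well data):

```python
def split_by_depth(samples, n_train=3874):
    """Depth-ordered split: the first n_train points form the training
    subset; the deeper remainder forms the test subset."""
    return samples[:n_train], samples[n_train:]

data = list(range(5165))        # stand-in for the 5165 depth-ordered points
train, test = split_by_depth(data)
print(len(train), len(test))    # 3874 1291
```

This roughly 75/25 split preserves the time-series (depth) ordering, which matters for sequence models such as GRU.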

First, all data are standardized to the range from zero to one with equation (11), which is applied to the test and training data sets in the RNN, GRU, and CA-GRU models. In addition, the convergence of the neural networks may be

Figure 6. (a) The position of the Ordos Basin in China; (b) the structural belts of the Ordos Basin and the sampling well near the Yimeng uplift [15].

Table 1. Summary of the recorded logging parameters for well E1.

          SP (mV)  GR (API)  DTC (μs/ft)  RT (Ω·m)  U (ppm)  K (%)  TH (ppm)  DEN (g/cm3)  CNL (%)
Min.      67.64    20.65     54.42        9.44      2.09     0.40   1.71      2.45         7.39
Max.      74.98    66.75     91.56        52.01     7.36     2.33   11.90     2.72         27.27
Average   72.17    44.04     74.33        23.29     4.39     1.08   5.43      2.57         16.18
SD        1.38     7.74      6.89         7.09      1.08     0.31   1.77      0.04         3.78

Min. = minimum value; Max. = maximum value; SD = standard deviation.


guaranteed by this preprocessing, which also increases the computation speed of the network methods:

$$x^{\ast} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}, \tag{11}$$

where x^{\ast} is the normalized data, x is the original data, and x_{\max} and x_{\min} are the maximum and minimum of the original dataset, respectively.
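Equation (11) is ordinary min-max normalization; a minimal sketch (the three DTC values below are illustrative, chosen to span the range reported in Table 1):

```python
import numpy as np

def minmax_normalize(x):
    """Scale a log curve into [0, 1] per equation (11)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

dtc = np.array([54.42, 74.33, 91.56])
print(minmax_normalize(dtc))  # [0.0, ~0.536, 1.0]
```

Each log curve is scaled independently, so curves with very different units (μs/ft, g/cm3, %) enter the network on a common [0, 1] scale.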

In this study, several statistical indicators, including the correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE), are used to compare prediction performance. These indicators quantify how closely the prediction model reproduces the actual values. The following equations give these standard tools.

The R criterion can be described as follows:

$$R = \sqrt{1 - \frac{\sum_{i=1}^{n}\left(\hat{p}_i - p_i\right)^2}{\sum_{i=1}^{n}\hat{p}_i^{\,2} - \frac{1}{n}\left(\sum_{i=1}^{n}\hat{p}_i\right)^2}}. \tag{12}$$

The MAE criterion may be depicted as follows:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{p}_i - p_i\right|. \tag{13}$$

VAF is usually used to assess the accuracy of a model by comparing the assessed values with the evaluated output of the model, and the VAF criterion can be computed as follows:

$$\mathrm{VAF} = \left(1 - \frac{\operatorname{var}\left[\hat{p}_i - p_i\right]}{\operatorname{var}\left[\hat{p}_i\right]}\right) \times 100. \tag{14}$$

The RMSE is traditionally applied to monitor the quality of the error function of the model; the performance of the model increases as the RMSE decreases. The RMSE criterion can be computed as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{p}_i - p_i\right)^2}, \tag{15}$$

where, for equations (12)–(15), p is the measured porosity, \hat{p} denotes the assessed porosity, and n is the number of testing data points.
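The four indicators can be sketched in a few lines of NumPy. This is an illustration, not the paper's code: R is computed here as the ordinary Pearson correlation coefficient rather than via equation (12), and the porosity values are invented for the example.

```python
import numpy as np

def evaluate(p, p_hat):
    """Performance indicators following equations (13)-(15); R is the
    ordinary Pearson correlation coefficient."""
    p, p_hat = np.asarray(p, float), np.asarray(p_hat, float)
    return {
        "R": np.corrcoef(p, p_hat)[0, 1],
        "MAE": np.abs(p_hat - p).mean(),                       # eq. (13)
        "VAF": (1 - np.var(p_hat - p) / np.var(p_hat)) * 100,  # eq. (14)
        "RMSE": np.sqrt(((p_hat - p) ** 2).mean()),            # eq. (15)
    }

# Illustrative porosity values (percent), not the paper's data.
measured = np.array([5.0, 8.0, 12.0, 15.0, 20.0])
predicted = np.array([5.5, 7.6, 12.4, 14.2, 19.8])
scores = evaluate(measured, predicted)
```

Note that RMSE is never smaller than MAE for the same errors, and VAF approaches 100 as the residual variance vanishes, which is how Table 3 should be read.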

Figure 7. Logging parameters plot for well E1.

Figure 8. Correlation between the logging data and the porosity parameter.


In machine learning studies, robustness is a key characteristic. To check whether the method of selecting training and testing data sets affects the robustness of GRU, we randomly drew ten training and testing sets from the 5165 data points in well E1. The randomly selected sets are shown in Figure 9(a), where the color represents the corresponding depth of each sample. For each of these ten cases, a GRU can be modeled and the RMSE of its training set and testing set calculated; the results are shown in Figure 9(b).

As shown in Figure 9(b), differences in the training samples lead to inconsistent prediction errors of GRU. It can therefore be concluded that the selection of training samples changes the GRU prediction errors. To further analyze the influence of different training samples on the GRU forecasting error, we applied statistical methods: the RMSE data of the ten cases above were subjected to one-way analysis of variance (at the 5% significance level), and the results are shown in Table 2. Since P = 0.569 > 0.05, there are no significant differences among the RMSE data of the 10 cases. Through this comparative analysis, we can safely conclude that different training samples produce differences in the prediction error of GRU, but the differences are not statistically significant. From a statistical point of view, the method of selecting training and testing samples therefore has little influence on the robustness of GRU. Accordingly, this study divides the training and test sets in depth order, which conforms to the statistical behavior observed above and is feasible and practical.
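The one-way ANOVA behind Table 2 can be sketched directly. This is a minimal illustration, not the paper's computation: the ten groups of two RMSE values each are randomly generated stand-ins, chosen only so that the degrees of freedom (9 between, 10 within) match Table 2.

```python
import numpy as np

def one_way_anova(groups):
    """One-way ANOVA: between-/within-group sums of squares and F."""
    data = np.concatenate(groups)
    grand_mean = data.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_between, df_within = len(groups) - 1, len(data) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, df_between, df_within, f

# Ten cases with a training and a testing RMSE each, as in Figure 9(b);
# the values are illustrative, not the paper's data.
rng = np.random.default_rng(1)
groups = [rng.normal(1.2, 0.1, 2) for _ in range(10)]
ssb, ssw, dfb, dfw, f = one_way_anova(groups)
print(dfb, dfw)  # 9 10
```

The computed F is then compared against the F(9, 10) distribution; in the paper's data this yields P = 0.569, i.e., no significant group effect.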

As is well known, determining the parameters of a neural network model is a prerequisite for successfully constructing it. In this study, the adaptive learning rate optimization algorithm (Adam) is used to train the network. Adam combines the advantages of the RMSProp and AdaGrad algorithms and can design an independent adaptive learning rate for each parameter. According to our tests, the best learning rate is 0.005, the batch size is 10, and the time step is 50. According to previous studies [18, 37] on neural networks, the numbers of hidden layers and hidden layer nodes have a great influence on prediction performance, and the optimal numbers differ between research fields; choosing them well is key to the prediction accuracy of a neural network model. Therefore, based on exhaustive (traversal) optimization, this study sets the hidden layer range to [1, 10] and the hidden layer node range to [1, 100]. By comparing the root mean square error (RMSE) of the model under different numbers of hidden layers and nodes, the combination with the minimum RMSE is obtained. The optimization results are shown in Figure 10. It can be seen from the figure that too many or too few hidden layers and neurons lead to drastic changes in the RMSE of the prediction results and hence to a decrease in prediction accuracy. Through traversal optimization, the optimal numbers of hidden layers and nodes for this study are 3 and 41, respectively.
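The traversal optimization described above is an exhaustive grid search. A minimal sketch follows; `evaluate_rmse` is a hypothetical stand-in for training a GRU per configuration, implemented here as a smooth surrogate with its minimum placed at 3 layers and 41 nodes so the sketch is runnable.

```python
def evaluate_rmse(n_layers, n_nodes):
    """Stand-in for training a GRU and returning its validation RMSE.
    The real study trains a network per configuration; this bowl-shaped
    surrogate (minimum near 3 layers, 41 nodes) only keeps the sketch runnable."""
    return 1.1 + 0.05 * (n_layers - 3) ** 2 + 0.0004 * (n_nodes - 41) ** 2

def grid_search(layer_range, node_range):
    """Exhaustively evaluate every (layers, nodes) pair and keep the best."""
    best = None
    for n_layers in layer_range:
        for n_nodes in node_range:
            rmse = evaluate_rmse(n_layers, n_nodes)
            if best is None or rmse < best[2]:
                best = (n_layers, n_nodes, rmse)
    return best

# Search the same ranges as the study: layers in [1, 10], nodes in [1, 100].
layers, nodes, rmse = grid_search(range(1, 11), range(1, 101))
print(layers, nodes, round(rmse, 4))  # → 3 41 1.1
```

In practice each grid point costs one full training run, which is why the search is confined to modest ranges.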

According to the above correlation analysis results, four logging parameters (DTC, DEN, CNL, and GR) that have a strong correlation with porosity are selected as the modeling

Table 2: The results of one-way ANOVA.

Source            Sum of squares   df   Mean square   F       P value
Between groups    0.077            9    0.009         0.883   0.569
Within groups     0.097            10   0.010
Total             0.174            19

Figure 9: Results of the GRU with randomly selected training sets. (a) Ten cases of randomly selected sets. (b) The RMSE of training and testing for each case.


independent variables of the porosity prediction model. Figure 11 shows the comparison between the actual and predicted porosity of the RNN, GRU, CA-GRU, and MLR models along the depth of well E1, together with a local enlargement. It shows that the results of the MLR model differ considerably from the measured porosity, while

Figure 10: Model performance under different numbers of hidden layers and nodes.

Figure 11: A comparison between measured and predicted porosity with the RNN, GRU, CA-GRU, and MLR models, respectively, in well E1. (a) Measured vs. RNN-predicted porosity. (b) Measured vs. GRU-predicted porosity. (c) Measured vs. CA-GRU-predicted porosity. (d) Measured vs. MLR-predicted porosity.


the results of the RNN, GRU, and CA-GRU models are very consistent with the actual measurements.

Eventually, the statistical indicators mentioned previously were also employed for this comparison, and the outcomes are presented in Table 3. Table 3 shows that the three porosity prediction models based on deep learning were far superior in prediction accuracy to the model established with traditional multiple linear regression (MLR). Comparing the three deep learning models (RNN, GRU, and CA-GRU) established in this study, the GRU and CA-GRU models were clearly superior to the RNN model in prediction precision, with CA-GRU achieving the best results: the highest R of 0.9423 and VAF of 88.7578% and the lowest MAE of 0.2101 and RMSE of 1.1412. The difference in precision between CA-GRU and GRU, however, was only slight.

To conclude, both the CA-GRU and GRU models provide successful porosity prediction, with the CA-GRU model showing the higher precision. The verification results in well E1 demonstrate that the CA-GRU model with optimal inputs can be regarded as an efficient tool for predicting porosity, in particular where high-precision porosity data are required.

5. Conclusions

Logging parameters reflect the acoustic, electrical, and radioactive sedimentary characteristics of formations in different geological periods. Porosity is a characteristic response of different formations and has strong time-series characteristics. Directly predicting porosity from logging parameters can avoid the high cost of special methods such as rock physical analysis and nuclear magnetic resonance, providing an accurate and low-cost basis for decision-making in petroleum exploration and development. Deep learning technology can find the nonlinear relationships between different parameters entirely from the data, which makes it well suited to nonlinear geophysical interpretation problems: it can make full use of the responses of various logging parameters to different formations and, at the same time, get rid of the limitations of linear prediction with traditional empirical formulas.

Considering the time-series characteristics of logging and porosity parameters, a reservoir porosity prediction method based on the GRU neural network is proposed in this study. It uses a correlation analysis method based on the Copula function to select sensitive logging parameters and then uses a GRU neural network to build the prediction model. This not only considers the influence of strongly correlated sample data on the prediction of porosity parameters but also accounts for the nonlinear mapping relationship between porosity parameters and logging curves, as well as the trend and correlation of logging information with depth. The Copula-based correlation measure can select the well logging curves sensitive to porosity, reduce the dimension of the model input, eliminate redundancy between variables, and improve the overall prediction performance of the model. The research results show that the GRU neural network has strong feature extraction ability and can effectively extract deep characteristics reflecting porosity from logging data. Compared with multiple linear regression analysis, it predicts porosity parameters more accurately and has strong robustness and anti-interference ability. This study provides a new idea for the accurate interpretation of logging data in oil and gas field exploration and development.

Abbreviations

DNN: Deep neural network
GRU: Gated recurrent unit
NMR: Nuclear magnetic resonance
CNN: Convolutional neural network
RNN: Recurrent neural network
SAE: Stacked autoencoder
U: Uranium
DEN: Bulk density
TH: Thorium
MAE: Mean absolute error
RMSE: Root mean square error
LSTM: Long short-term memory
CA: Correlation analysis
MLR: Multiple linear regression
SP: Spontaneous potential
CNL: Compensated neutron log
DTC: Compressional wave slowness
RT: Resistivity
GR: Natural gamma-ray
K: Potassium
R: Correlation coefficient
VAF: Variance accounted for
CA-GRU: Correlation analysis-gated recurrent unit

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Table 3: Comparing the CA-GRU model with the RNN, GRU, and MLR models by using four performance indicators: correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE).

Method    R        MAE      VAF (%)   RMSE
RNN       0.9034   0.2728   81.5798   1.4952
GRU       0.9280   0.2303   86.1089   1.2643
CA-GRU    0.9423   0.2101   88.7578   1.1412
MLR       0.8489   0.9035   71.7449   2.2589


Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are grateful for the financial support provided by the National Natural Science Foundation of China (41504041). The authors would also like to thank Xiaoyan Deng for assisting in the preparation of this manuscript, and are particularly grateful to her for checking the grammar, style, and syntax.

References

[1] S. Singh, A. I. Kanli, and S. Sevgen, "A general approach for porosity estimation using artificial neural network method: a case study from Kansas gas field," Studia Geophysica et Geodaetica, vol. 60, no. 1, pp. 130–140, 2016.

[2] Z. Zhong and T. R. Carr, "Application of a new hybrid particle swarm optimization-mixed kernels function-based support vector machine model for reservoir porosity prediction: a case study in Jacksonburg-Stringtown oil field, West Virginia, USA," Interpretation, vol. 7, no. 1, pp. T97–T112, 2019.

[3] A. Bakhorji, H. Mustafa, and S. Aramco, Rock Physics Modeling and Analysis of Elastic Signatures for Intermediate to Low Porosity Sandstones, pp. 1–5, Society of Exploration Geophysicists, Houston, TX, USA, 2012.

[4] S. Tao, Z. Pan, S. Chen, and S. Tang, "Coal seam porosity and fracture heterogeneity of marcolithotypes in the Fanzhuang Block, southern Qinshui Basin, China," Journal of Natural Gas Science and Engineering, vol. 66, pp. 148–158, 2019.

[5] S. Tao, X. Zhao, D. Tang, C. Deng, Q. Meng, and Y. Cui, "A model for characterizing the continuous distribution of gas storing space in low-rank coals," Fuel, vol. 233, pp. 552–557, 2018.

[6] S. Tao, S. Chen, D. Tang, X. Zhao, H. Xu, and S. Li, "Material composition, pore structure and adsorption capacity of low-rank coals around the first coalification jump: a case of eastern Junggar Basin, China," Fuel, vol. 211, pp. 804–815, 2018.

[7] C. Zhang, X. Wang, and L. Q. Zhu, "Estimation of total porosity in shale formations from element capture logging and conventional logging data," Arabian Journal of Geosciences, vol. 11, no. 11, p. 264, 2018.

[8] A. Khaksar and C. M. Griffiths, "Porosity from sonic log in gas-bearing shaly sandstones: field data versus empirical equations," Exploration Geophysics, vol. 29, no. 3-4, pp. 440–446, 1998.

[9] C. Guo, C. Zhang, and L. Zhu, "Predicting the total porosity of shale gas reservoirs," Petroleum Science and Technology, vol. 35, no. 10, pp. 1022–1031, 2017.

[10] H. L. Crow, R. J. Enkin, J. B. Percival, and H. A. J. Russell, "Downhole nuclear magnetic resonance logging in glaciomarine sediments near Ottawa, Ontario, Canada," Near Surface Geophysics, vol. 18, no. 6, p. 591, 2020.

[11] J. O. Parra, C. Hackert, M. Bennett, and H. A. Collier, "Permeability and porosity images based on NMR, sonic, and seismic reflectivity: application to a carbonate aquifer," The Leading Edge, vol. 22, no. 11, pp. 1102–1108, 2003.

[12] E. P. Leite and A. C. Vidal, "3D porosity prediction from seismic inversion and neural networks," Computers & Geosciences, vol. 37, no. 8, pp. 1174–1180, 2011.

[13] X. Shi, G. Liu, Y. Cheng et al., "Brittleness index prediction in shale gas reservoirs based on efficient network models," Journal of Natural Gas Science and Engineering, vol. 35, pp. 673–685, 2016.

[14] P. Wang, S. Peng, and T. He, "A novel approach to total organic carbon content prediction in shale gas reservoirs with well logs data, Tonghua Basin, China," Journal of Natural Gas Science and Engineering, vol. 55, pp. 1–15, 2018.

[15] P. Wang and S. Peng, "A new scheme to improve the performance of artificial intelligence techniques for estimating total organic carbon from well logs," Energies, vol. 11, no. 4, p. 747, 2018.

[16] F. S. Feng, P. Wang, Z. Wei et al., "A new method for predicting the permeability of sandstone in deep reservoirs," Geofluids, vol. 2020, Article ID 8844464, 16 pages, 2020.

[17] A. A. Talkhestani, "Prediction of effective porosity from seismic attributes using locally linear model tree algorithm," Geophysical Prospecting, vol. 63, no. 3, pp. 680–693, 2015.

[18] P. Wang and S. Peng, "On a new method of estimating shear wave velocity from conventional well logs," Journal of Petroleum Science and Engineering, vol. 180, pp. 105–123, 2019.

[19] F. S. T. Haklidir and M. Haklidir, "Prediction of reservoir temperatures using hydrogeochemical data, western Anatolia geothermal systems (Turkey): a machine learning approach," Natural Resources Research, vol. 29, no. 4, pp. 2333–2346, 2020.

[20] A. A. Mahmoud, S. Elkatatny, and D. Al Shehri, "Application of machine learning in evaluation of the static Young's modulus for sandstone formations," Sustainability, vol. 12, no. 5, p. 1880, 2020.

[21] M. He, H. Gu, and H. Wan, "Log interpretation for lithology and fluid identification using deep neural network combined with MAHAKIL in a tight sandstone reservoir," Journal of Petroleum Science and Engineering, vol. 194, p. 107498, 2020.

[22] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.

[23] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.

[24] L. C. Mao and J. Zhao, "A survey on the new generation of deep learning in image processing," IEEE Access, vol. 7, pp. 172231–172263, 2019.

[25] R. Udendhran, M. Balamurugan, A. Suresh, and R. Varatharajan, "Enhancing image processing architecture using deep learning for embedded vision systems," Microprocessors and Microsystems, vol. 76, p. 103094, 2020.

[26] R. J. G. Van Sloun, R. Cohen, and Y. C. Eldar, "Deep learning in ultrasound imaging," Proceedings of the IEEE, vol. 108, no. 1, pp. 11–29, 2020.

[27] J. Liu, C. Wu, and J. Wang, "Gated recurrent units based neural network for time heterogeneous feedback recommendation," Information Sciences, vol. 423, pp. 50–65, 2018.

[28] Y. Hao, Y. Sheng, and J. Wang, "Variant gated recurrent units with encoders to preprocess packets for payload-aware intrusion detection," IEEE Access, vol. 7, pp. 49985–49998, 2019.

[29] F. T. Dezaki, Z. B. Liao, C. Luong et al., "Cardiac phase detection in echocardiograms with densely gated recurrent neural networks and global extrema loss," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1821–1832, 2019.

[30] Z. Wang, Y. Dong, W. Liu, and Z. Ma, "A novel fault diagnosis approach for chillers based on 1-D convolutional neural network and gated recurrent unit," Sensors, vol. 20, no. 9, p. 2458, 2020.

[31] H. Colonius, "An invitation to coupling and copulas: with applications to multisensory modeling," Journal of Mathematical Psychology, vol. 74, pp. 2–10, 2016.

[32] N. Uyttendaele, "On the estimation of nested Archimedean copulas: a theoretical and an experimental comparison," Computational Statistics, vol. 33, no. 2, pp. 1047–1070, 2018.

[33] P. Vincent-Lamarre, M. Calderini, and J.-P. Thivierge, "Learning long temporal sequences in spiking networks by multiplexing neural oscillations," Frontiers in Computational Neuroscience, vol. 14, p. 78, 2020.

[34] W. Huang, Y. Li, and Y. Huang, "Deep hybrid neural network and improved differential neuroevolution for chaotic time series prediction," IEEE Access, vol. 8, pp. 159552–159565, 2020.

[35] Y. Wu, M. Yuan, S. Dong, L. Lin, and Y. Liu, "Remaining useful life estimation of engineered systems using vanilla LSTM neural networks," Neurocomputing, vol. 275, pp. 167–179, 2018.

[36] H. Yan, Y. Qin, S. Xiang, Y. Wang, and H. Chen, "Long-term gear life prediction based on ordered neurons LSTM neural networks," Measurement, vol. 165, p. 108205, 2020.

[37] X. Shi, J. Wang, G. Liu, L. Yang, X. Ge, and S. Jiang, "Application of extreme learning machine and neural networks in total organic carbon content prediction in organic shale with wire line logs," Journal of Natural Gas Science and Engineering, vol. 33, pp. 687–702, 2016.


characteristics of different geological periods, and their change trends carry important information about physical parameters [14]. When traditional statistical analysis or conventional machine learning methods are used to predict porosity, the potential internal relations in the historical series of logging parameters are easily destroyed, reducing the accuracy of the prediction results. Unlike other machine learning or deep learning methods, the GRU neural network has long-term memory [35]. By dealing with the long-term dependence between series data, the GRU neural network can effectively capture such relationships, and its internal gating mechanism can automatically learn time-series features [36]. Figure 4 shows the structure of a three-layer GRU neural network model.

As can be seen from Figure 4, the GRU neural network model consists of an input layer, hidden layers, and an output layer, of which the hidden layers are the core of the network structure. During training, the hyperparameters of the GRU model structure, including the number of hidden layers and the number of hidden layer neurons, must be optimized. Theoretically, the more hidden layers and neurons, the better the model performance: the deeper and more complex the network, the higher the prediction accuracy. However, some studies [13, 18] have shown that too many hidden layers and neurons lead to training difficulties and overfitting, which reduce the prediction accuracy of the model, while a network that is too shallow and simple easily underfits and fails to meet the expected requirements. Therefore, the choice of the numbers of hidden layers and neurons is very important to the prediction performance of the network. We need to balance the learning ability of the network against the training complexity and the required prediction accuracy, and determine the optimal numbers of nodes and hidden layers from experience and repeated experiments. In addition, optimizing training parameters such as the learning rate, batch size, and maximum number of iterations can reduce the complexity of the model to a certain extent and improve its convergence speed and prediction accuracy.

The training process of the GRU neural network can be roughly divided into three steps:

Step 1: input training data into the network, calculate the output of each GRU unit from shallow to deep layers along the forward propagation direction, and obtain the predicted output corresponding to the input data at the current time point.

Step 2: calculate the error of each neuron along the back-propagation direction. Error back-propagation in the GRU neural network includes propagation along the time sequence and propagation layer by layer between the network layers.

Step 3: calculate the gradient of each weight from the back-propagated error, and adjust the network weights with the learning-rate-adaptive optimization algorithm (Adam). Finally, repeat the above steps to iteratively optimize the network.
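The forward pass in Step 1 rests on the standard GRU gate equations. A minimal sketch of a single GRU step follows; for brevity each hidden unit here uses only its own recurrent weight (a diagonal simplification of the fully connected GRU), and the weights are hand-picked placeholders rather than values learned via Adam.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gru_cell(x, h_prev, W):
    """One GRU step for scalar input x and a 2-unit hidden state.
    W holds per-unit (w_x, w_h, b) triples for the update gate z,
    the reset gate r, and the candidate state n."""
    h_new = []
    for i in range(len(h_prev)):
        z = sigmoid(W["z"][i][0] * x + W["z"][i][1] * h_prev[i] + W["z"][i][2])
        r = sigmoid(W["r"][i][0] * x + W["r"][i][1] * h_prev[i] + W["r"][i][2])
        n = math.tanh(W["n"][i][0] * x + W["n"][i][1] * (r * h_prev[i]) + W["n"][i][2])
        # The update gate blends the previous state with the candidate state.
        h_new.append((1 - z) * n + z * h_prev[i])
    return h_new

# Hypothetical weights; a real model learns these via Adam (Step 3).
W = {"z": [(0.5, 0.1, 0.0), (-0.3, 0.2, 0.1)],
     "r": [(0.4, -0.1, 0.0), (0.2, 0.3, -0.1)],
     "n": [(0.9, 0.5, 0.0), (0.7, -0.4, 0.2)]}

h = [0.0, 0.0]
for x in [0.2, 0.5, 0.8]:           # a short sequence of normalized log values
    h = gru_cell(x, h, W)
print([round(v, 3) for v in h])      # final hidden state after three time steps
```

The update gate z is what gives the GRU its long-term memory: when z is close to 1, the previous hidden state passes through almost unchanged.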

2.6. Prediction Model Based on CA-GRU. The modeling process of the combined forecasting model based on CA-GRU is shown in Figure 5 and mainly includes the following six steps:

Step 1: the obtained logging curves and porosity parameters are used as the database. The Kendall rank correlation coefficient τ, the Spearman rank correlation coefficient ρ, and the Pearson linear correlation coefficient, computed on the basis of the Copula function, are used to quantitatively analyze the degree of correlation between them, and the logging curves sensitive to the target parameter are selected to form a new sample data set.

Step 2: the new sample data are standardized and divided into a training set and a test set in a certain proportion.

Step 3: the GRU neural network model for porosity prediction is constructed and the network parameters are initialized. The numbers of network layers and hidden layer neurons are determined experimentally.

Step 4: during training, the network structure is continuously optimized until the training error reaches the preset target, and the model is then saved.

Step 5: the trained GRU model is tested on the held-out test set, and the model's predictions are denormalized to obtain porosity values on the scale of the actual measurements.

Step 6: the predicted values are compared with the actual values and the error is analyzed. The prediction

Figure 4: Structure diagram of the three-layer GRU neural network model.


performance of the model is evaluated according to the corresponding evaluation indices.

3. Data Processing and Analysis

3.1. Data Preparation. As shown in Figure 6(a), the Ordos Basin is a large superimposed basin in central China with huge oil and gas resources. It is considered one of the Chinese basins with the greatest potential for growth in oil and gas reserves and production, and it is a petroliferous basin in which stratigraphic and lithologic traps are the main trap types. The accumulated explored natural gas reserves in the Ordos Basin are about 2.7×10¹² m³, and the geological resources still to be explored are about 12.5×10¹² m³, which indicates that the basin is still in an early stage of exploration. In addition, the Ordos Basin is also rich in unconventional oil and gas resources such as coalbed methane, shale gas, and tight sandstone gas. The geological resources of coalbed methane in the basin are about 9.86×10¹² m³, of which about 1.79×10¹² m³ is recoverable; the geological resources of shale gas are about 5.3×10¹² m³, of which about 2.91×10¹² m³ is recoverable; and the geological resources of tight sandstone gas are about 7.84×10¹² m³, of which about 3.53×10¹² m³ is recoverable [18]. For both conventional and unconventional oil and gas resources, the Ordos Basin thus has great exploration potential. Therefore, a fast and accurate method is needed to obtain porosity information, which is an important parameter for oil and gas exploration.

As shown in Figure 6(b), well E1 is a shale gas well in the northeastern part of the Ordos Basin, China. Preparing a database of logging data is a very important step in constructing the model. In this study, the available well logs of well E1 include spontaneous potential (SP), compensated neutron log (CNL), compressional wave slowness (DTC), resistivity (RT), uranium (U), natural gamma-ray (GR), bulk density (DEN), potassium (K), and thorium (TH). Table 1 summarizes the recorded logging parameter data for well E1.

Figure 7 presents the logging parameters plot for well E1. Obtaining porosity information from logging parameters is very important in reservoir evaluation, which makes this study meaningful for oil and gas exploration.

3.2. Data Analysis. For the model, it is very important to select suitable logging inputs when preparing for accurate porosity prediction. In this study, the Kendall rank correlation coefficient τ, the Spearman rank correlation coefficient ρ, and the Pearson linear correlation coefficient R, based on the Copula function, are used to quantitatively calculate the correlation between logging parameters and the porosity parameter. The comparison of the absolute values of the three correlation coefficients is shown in Figure 8.

As can be seen from Figure 8, Pearson correlation analysis often misses the nonlinear correlation between logging parameters and porosity. In the correlation analysis between the logging data and porosity, the correlation coefficients of DTC, DEN, and CNL with porosity are relatively


Figure 5: Modeling flow chart of the porosity parameter prediction model based on CA-GRU.


high, indicating that DTC, DEN, and CNL have a strong correlation with porosity. Although the Pearson correlation coefficient between GR and porosity is relatively low, the Copula-based correlation analysis yields higher τ and ρ values, which shows that the linear correlation between GR and porosity is low but the nonlinear correlation is high. To sum up, both linear and nonlinear correlation analysis methods are used to screen the conventional logging parameters in this study. Finally, four logging parameters, DTC, DEN, CNL, and GR, are selected as independent variables for network modeling, and a porosity prediction model is established.
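The paper estimates τ and ρ through a fitted Copula function; the direct empirical estimators sketched below (without tie handling, which is a simplification) compute the same three quantities and convey how linear and rank-based correlation can disagree. The DTC and porosity samples are illustrative, not the well E1 data.

```python
import math

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(v):
    """1-based ranks; no tie correction, a simplification for this sketch."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank + 1.0
    return r

def spearman(x, y):
    """Spearman rho: Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

def kendall(x, y):
    """Kendall tau: (concordant - discordant) pairs over all pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical DTC (μs/ft) and porosity (%) samples; the study uses 5165 points.
dtc = [58.1, 62.4, 70.2, 74.9, 80.3, 85.7]
phi = [8.2, 9.9, 13.1, 15.4, 18.8, 21.0]
print(round(pearson(dtc, phi), 3), round(spearman(dtc, phi), 3), round(kendall(dtc, phi), 3))
```

For a monotonic but strongly nonlinear relationship (such as GR versus porosity here), τ and ρ remain high while the Pearson coefficient drops, which is exactly the behavior exploited in the screening above.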

4. Results and Discussion

As mentioned above, a set of 5165 data points from well E1 has been used to build the model. This set is divided into training and test subsets by depth: the training subset consists of the first 3874 data points (about 75%), and the test subset consists of the remaining points. For comparison, RNN, GRU, CA-GRU, and multiple linear regression (MLR) models were selected and applied to predict porosity.
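The depth-ordered split can be sketched as follows; the tuple layout and field names are illustrative, and the synthetic data merely stands in for the 5165 real points.

```python
def split_by_depth(samples, train_count=3874):
    """Depth-ordered split: the shallowest `train_count` samples train the
    model and the deeper remainder tests it, mirroring the 3874/5165
    division described above. `samples` are (depth, features, porosity)
    tuples; these field positions are assumptions of this sketch."""
    ordered = sorted(samples, key=lambda s: s[0])   # shallow to deep
    return ordered[:train_count], ordered[train_count:]

# Tiny synthetic stand-in for the 5165 real points (depth, features, porosity).
data = [(3300.0 + 0.1 * i, [0.0], 10.0) for i in range(5165)]
train, test = split_by_depth(data)
print(len(train), len(test))  # → 3874 1291
```

Splitting by depth rather than at random means the test set probes extrapolation to deeper strata, which is the robustness question examined with the ten random splits later in this section.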

In the beginning, all data are standardized to the range from zero to one with equation (11), which is applied to the test and training data sets of the RNN, GRU, and CA-GRU models. In addition, the convergence of the neural networks may be


Figure 6: (a) The position of the Ordos Basin in China; (b) the structural belts of the Ordos Basin and the sampling well near the Yimeng uplift [15].

Table 1: Summary of the recorded logging parameters for well E1.

Statistic            SP (mV)   GR (API)   DTC (μs/ft)   RT (Ω·m)   U (ppm)   K (%)   TH (ppm)   DEN (g/cm³)   CNL (%)
Minimum value        67.64     20.65      54.42         9.44       2.09      0.40    1.71       2.45          7.39
Maximum value        74.98     66.75      91.56         52.01      7.36      2.33    11.90      2.72          27.27
Average              72.17     44.04      74.33         23.29      4.39      1.08    5.43       2.57          16.18
Standard deviation   1.38      7.74       6.89          7.09       1.08      0.31    1.77       0.04          3.78


guaranteed by the preprocessing, which also increases the computing speed of the network methods.

\[ x^{*} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}, \tag{11} \]

where \(x^{*}\) is the normalized value, \(x\) is the original value, and \(x_{\max}\) and \(x_{\min}\) are the maximum and minimum of the original data set, respectively.
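Equation (11) and its inverse (needed in Step 5 of the CA-GRU workflow, where predictions are mapped back to the measured scale) can be sketched as follows; the porosity values are illustrative.

```python
def normalize(values):
    """Min-max scale a sequence to [0, 1], as in equation (11)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values], lo, hi

def denormalize(scaled, lo, hi):
    """Invert equation (11): map [0, 1] predictions back to the measured scale."""
    return [s * (hi - lo) + lo for s in scaled]

porosity = [7.39, 16.18, 27.27, 12.5]        # illustrative porosity values (%)
scaled, lo, hi = normalize(porosity)
print([round(s, 3) for s in scaled])          # all values now lie in [0, 1]
restored = denormalize(scaled, lo, hi)        # recovers the original values
```

Note that `lo` and `hi` must come from the training data and be reused for the test data, otherwise the two sets would be scaled inconsistently.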

In this study, several statistical indicators, including the correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE), are used to compare prediction performance. These indicators convey how well a prediction model reproduces the actual values. The following equations define these standard tools.

The R criterion can be described as follows:

\[ R = \sqrt{1 - \frac{\sum_{i=1}^{n} \left( \hat{p}_{i} - p_{i} \right)^{2}}{\sum_{i=1}^{n} \hat{p}_{i}^{\,2} - \frac{1}{n} \left( \sum_{i=1}^{n} \hat{p}_{i} \right)^{2}}}. \tag{12} \]

The MAE criterion may be depicted as follows:

\[ \mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \hat{p}_{i} - p_{i} \right|. \tag{13} \]

VAF is usually used to assess the accuracy of a model by comparing the assessed values with the evaluated output of the model, and the VAF criterion can be computed as follows:

\[ \mathrm{VAF} = \left( 1 - \frac{\operatorname{var}\left( \hat{p}_{i} - p_{i} \right)}{\operatorname{var}\left( \hat{p}_{i} \right)} \right) \times 100. \tag{14} \]

The RMSE is traditionally applied to monitor the quality of the error function of the model; model performance increases as the RMSE decreases. The RMSE criterion can be computed as follows:

\[ \mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \hat{p}_{i} - p_{i} \right)^{2}}, \tag{15} \]

where, for equations (12)–(15), \(p\) is the measured porosity, \(\hat{p}\) denotes the estimated porosity, and \(n\) is the number of testing data points.


8 Mathematical Problems in Engineering

In the study of machine learning robustness is a keycharacteristic Considering that the method of selectingtraining and testing data sets avoids the impact on the ro-bustness of GRU we randomly selected ten training andtesting set from 5165 data points in well E1 )e randomlyselected sets are shown in Figure 9(a) where the colorrepresents the corresponding depth of the sample For eachof these ten cases GRU can be modeled and its RMSE of thetraining set and the testing set can be calculated respectivelyand the results are shown in Figure 9(b)

As shown in Figure 9(b) the difference of trainingsamples will lead to inconsistent prediction errors of GRU)erefore it can be concluded that the selection of trainingsamples will bring about changes in GRU prediction errorsIn order to further analyze the influence of different trainingsamples of GRU forecasting error we use statisticalmethods )e above ten cases of RMSE data have carried onthe single factor analysis of variance (the confidence level of5) and the analysis results are shown in Table 2P 0569gt 005 indicating that there are no significantdifferences between RMSE data of 10 cases )roughcomparative analysis we can safely conclude that differenttraining samples had differences in the prediction error ofGRU but there is no significant difference in statistics Itmeans from a statistical point of view the selection methodof training and testing samples had little influence on therobustness of GRU)erefore this study divides the trainingsample set and the test set according to the depth orderwhich conforms to the statistical law and is feasible andpractical

As is well known, determining the parameters of the neural network model is a prerequisite for successfully constructing it. In this study, the adaptive learning rate optimization algorithm (Adam) is used to optimize the network. Adam combines the advantages of the RMSProp and AdaGrad algorithms and can assign an independent adaptive learning rate to each parameter. According to the tests, the best learning rate is 0.005, the batch size is 10, and the time step is 50. According to previous studies [18, 37] on neural networks, the numbers of hidden layers and hidden layer nodes have a great influence on the prediction performance of a neural network, and they differ between research fields; choosing the optimal numbers of hidden layers and hidden layer nodes is the key to ensuring the prediction accuracy of the model. Therefore, following a traversal (exhaustive search) optimization strategy, this study sets the hidden layer range as [1, 10] and the hidden layer node range as [1, 100]. By comparing the root mean square error (RMSE) of the model under different numbers of hidden layers and hidden layer nodes, the values yielding the minimum RMSE are obtained. The optimization test results are shown in Figure 10. It can be seen from the figure that too many or too few hidden layers and neurons lead to drastic changes in the RMSE of the prediction results and hence a decrease in prediction accuracy. Through traversal optimization, the optimal numbers of hidden layers and nodes for this study are 3 and 41, respectively.
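The traversal optimization over the two architecture ranges can be sketched as follows. `evaluate_rmse` is a placeholder for a function that trains a GRU with the given architecture and returns its validation RMSE; the toy objective below only mimics the reported optimum of 3 layers and 41 nodes and is not a real training run.

```python
import numpy as np

def grid_search_architecture(evaluate_rmse, layer_range=range(1, 11),
                             node_range=range(1, 101)):
    """Traverse every (layers, nodes) pair and keep the lowest-RMSE one."""
    best = (None, None, np.inf)
    for layers in layer_range:
        for nodes in node_range:
            rmse = evaluate_rmse(layers, nodes)
            if rmse < best[2]:
                best = (layers, nodes, rmse)
    return best

# Toy stand-in objective whose minimum sits at 3 layers and 41 nodes.
toy = lambda layers, nodes: (layers - 3) ** 2 + 0.01 * (nodes - 41) ** 2
layers, nodes, rmse = grid_search_architecture(toy)
```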

According to the above correlation analysis results, the four logging parameters DTC, DEN, CNL, and GR, which have a strong correlation with porosity, are selected as the modeling

Table 2: The results of one-way ANOVA.

Source         | Sum of squares | df | Mean square | F     | P value
Between groups | 0.077          | 9  | 0.009       | 0.883 | 0.569
Within groups  | 0.097          | 10 | 0.010       |       |
Total          | 0.174          | 19 |             |       |

Figure 9: Results of the GRU with randomly selected training sets. (a) Ten cases of randomly selected sets (depth range about 3340–3440 m). (b) The RMSE of training and testing for each case.


independent variables of the porosity prediction model. Figure 11 shows the comparison between the actual and predicted porosity of the RNN, GRU, CA-GRU, and MLR models over the depth of well E1, together with a local enlargement. It shows that the results of the MLR model differ considerably from the measured porosity, while

Figure 10: Model performance (RMSE) under different numbers of hidden layers and hidden layer nodes.

Figure 11: A comparison between measured and predicted porosity with the RNN, GRU, CA-GRU, and MLR models, respectively, in well E1. (a) Measured versus RNN-predicted porosity. (b) Measured versus GRU-predicted porosity. (c) Measured versus CA-GRU-predicted porosity. (d) Measured versus MLR-predicted porosity.


the results of the RNN, GRU, and CA-GRU models are very consistent with the actual measurements.

Finally, the statistical indicators mentioned previously were also employed to carry out this comparison, and the outcomes are presented in Table 3. Table 3 shows that the three porosity prediction models based on deep learning were far superior in prediction accuracy to the model established with the traditional multiple linear regression (MLR) method. Comparing the prediction precision of the three deep learning models (RNN, GRU, and CA-GRU) established in this study, the GRU and CA-GRU models were evidently superior to the RNN model, and CA-GRU achieved the best precision, with the highest R of 0.9423 and VAF of 88.7578% and the lowest MAE of 0.2101 and RMSE of 1.1412. The difference in precision between CA-GRU and GRU was only slight.

To conclude, both the CA-GRU and GRU models provide successful porosity prediction performance. The CA-GRU model showed the higher precision in predicting porosity, and from Table 3 we can conclude that the CA-GRU model built in this study predicts porosity more efficiently and precisely. The verification results in well E1 show that the CA-GRU model with optimal inputs can be regarded as an efficient tool for predicting porosity, particularly in areas where high-precision porosity data are required.

5. Conclusions

Logging parameters reflect the acoustic, radioactive, and electrical sedimentary characteristics of strata from different geological periods. Porosity is the characteristic response of different formations and has strong time series characteristics. Directly predicting porosity from logging parameters can effectively avoid the high cost of special methods such as rock physical analysis and nuclear magnetic resonance, providing an accurate and low-cost basis for decision-making in petroleum exploration and development. Deep learning technology can find the nonlinear relationships between different parameters entirely from the data, which makes it very suitable for solving nonlinear geophysical interpretation problems. It can not only make full use of the response characteristics of the various logging parameters to different formations but also escape the limitations of linear prediction with traditional empirical formulas.

Considering the time series characteristics of logging and porosity parameters, a reservoir porosity prediction method based on the GRU neural network is proposed in this study. It uses a correlation analysis method based on the Copula function to select sensitive logging parameters and then uses a GRU neural network to build the prediction model. This not only considers the influence of strongly correlated sample data on the prediction of porosity parameters but also accounts for the nonlinear mapping relationship between porosity parameters and logging curves, as well as the trend and correlation of logging information with depth. The Copula-based correlation measure can select the well logging curves that are sensitive to porosity parameters, reduce the dimension of the model input, eliminate redundancy between variables, and improve the overall prediction performance of the model. The research results show that the GRU neural network model has strong feature extraction ability and can effectively extract deep characteristics reflecting porosity parameters from logging data. Compared with models such as multiple linear regression analysis, it can predict porosity parameters more accurately and has strong robustness and anti-interference ability. This study provides a new idea for the accurate interpretation of logging data in oil-gas field exploration and development.

Abbreviations

DNN: Deep neural network
GRU: Gated recurrent unit
NMR: Nuclear magnetic resonance
CNN: Convolutional neural network
RNN: Recurrent neural network
SAE: Stack autoencoder
U: Uranium
DEN: Bulk density
TH: Thorium
MAE: Mean absolute error
RMSE: Root mean square error
LSTM: Long short-term memory
CA: Correlation analysis
MLR: Multiple linear regression
SP: Spontaneous potential
CNL: Compensated neutron log
DTC: Compressional wave slowness
RT: Resistivity
GR: Natural gamma-ray
K: Potassium
R: Correlation coefficient
VAF: Variance accounted for
CA-GRU: Correlation analysis-gated recurrent unit

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Table 3: Comparing the CA-GRU model with the RNN, GRU, and MLR models using four performance indicators: correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE).

Method | R      | MAE    | VAF (%) | RMSE
RNN    | 0.9034 | 0.2728 | 81.5798 | 1.4952
GRU    | 0.928  | 0.2303 | 86.1089 | 1.2643
CA-GRU | 0.9423 | 0.2101 | 88.7578 | 1.1412
MLR    | 0.8489 | 0.9035 | 71.7449 | 2.2589


Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are grateful for the financial support provided by the National Natural Science Foundation of China (41504041). The authors would also like to thank Xiaoyan Deng for assisting in the preparation of this manuscript, and are particularly grateful to her for checking the grammar, style, and syntax of the manuscript.

References

[1] S. Singh, A. I. Kanli, and S. Sevgen, "A general approach for porosity estimation using artificial neural network method: a case study from Kansas gas field," Studia Geophysica et Geodaetica, vol. 60, no. 1, pp. 130–140, 2016.

[2] Z. Zhong and T. R. Carr, "Application of a new hybrid particle swarm optimization-mixed kernels function-based support vector machine model for reservoir porosity prediction: a case study in Jacksonburg-Stringtown oil field, West Virginia, USA," Interpretation, vol. 7, no. 1, pp. T97–T112, 2019.

[3] A. Bakhorji, H. Mustafa, and S. Aramco, Rock Physics Modeling and Analysis of Elastic Signatures for Intermediate to Low Porosity Sandstones, pp. 1–5, Society of Exploration Geophysicists, Houston, TX, USA, 2012.

[4] S. Tao, Z. Pan, S. Chen, and S. Tang, "Coal seam porosity and fracture heterogeneity of marcolithotypes in the Fanzhuang Block, southern Qinshui Basin, China," Journal of Natural Gas Science and Engineering, vol. 66, pp. 148–158, 2019.

[5] S. Tao, X. Zhao, D. Tang, C. Deng, Q. Meng, and Y. Cui, "A model for characterizing the continuous distribution of gas storing space in low-rank coals," Fuel, vol. 233, pp. 552–557, 2018.

[6] S. Tao, S. Chen, D. Tang, X. Zhao, H. Xu, and S. Li, "Material composition, pore structure and adsorption capacity of low-rank coals around the first coalification jump: a case of eastern Junggar Basin, China," Fuel, vol. 211, pp. 804–815, 2018.

[7] C. Zhang, X. Wang, and L. Q. Zhu, "Estimation of total porosity in shale formations from element capture logging and conventional logging data," Arabian Journal of Geosciences, vol. 11, no. 11, p. 264, 2018.

[8] A. Khaksar and C. M. Griffiths, "Porosity from sonic log in gas-bearing shaly sandstones: field data versus empirical equations," Exploration Geophysics, vol. 29, no. 3-4, pp. 440–446, 1998.

[9] C. Guo, C. Zhang, and L. Zhu, "Predicting the total porosity of shale gas reservoirs," Petroleum Science and Technology, vol. 35, no. 10, pp. 1022–1031, 2017.

[10] H. L. Crow, R. J. Enkin, J. B. Percival, and H. A. J. Russell, "Downhole nuclear magnetic resonance logging in glaciomarine sediments near Ottawa, Ontario, Canada," Near Surface Geophysics, vol. 18, no. 6, p. 591, 2020.

[11] J. O. Parra, C. Hackert, M. Bennett, and H. A. Collier, "Permeability and porosity images based on NMR, sonic, and seismic reflectivity: application to a carbonate aquifer," The Leading Edge, vol. 22, no. 11, pp. 1102–1108, 2003.

[12] E. P. Leite and A. C. Vidal, "3D porosity prediction from seismic inversion and neural networks," Computers & Geosciences, vol. 37, no. 8, pp. 1174–1180, 2011.

[13] X. Shi, G. Liu, Y. Cheng et al., "Brittleness index prediction in shale gas reservoirs based on efficient network models," Journal of Natural Gas Science and Engineering, vol. 35, pp. 673–685, 2016.

[14] P. Wang, S. Peng, and T. He, "A novel approach to total organic carbon content prediction in shale gas reservoirs with well logs data, Tonghua Basin, China," Journal of Natural Gas Science and Engineering, vol. 55, pp. 1–15, 2018.

[15] P. Wang and S. Peng, "A new scheme to improve the performance of artificial intelligence techniques for estimating total organic carbon from well logs," Energies, vol. 11, no. 4, p. 747, 2018.

[16] F. S. Feng, P. Wang, Z. Wei et al., "A new method for predicting the permeability of sandstone in deep reservoirs," Geofluids, vol. 2020, Article ID 8844464, 16 pages, 2020.

[17] A. A. Talkhestani, "Prediction of effective porosity from seismic attributes using locally linear model tree algorithm," Geophysical Prospecting, vol. 63, no. 3, pp. 680–693, 2015.

[18] P. Wang and S. Peng, "On a new method of estimating shear wave velocity from conventional well logs," Journal of Petroleum Science and Engineering, vol. 180, pp. 105–123, 2019.

[19] F. S. T. Haklidir and M. Haklidir, "Prediction of reservoir temperatures using hydrogeochemical data, western Anatolia geothermal systems (Turkey): a machine learning approach," Natural Resources Research, vol. 29, no. 4, pp. 2333–2346, 2020.

[20] A. A. Mahmoud, S. Elkatatny, and D. Al Shehri, "Application of machine learning in evaluation of the static Young's modulus for sandstone formations," Sustainability, vol. 12, no. 5, p. 1880, 2020.

[21] M. He, H. Gu, and H. Wan, "Log interpretation for lithology and fluid identification using deep neural network combined with MAHAKIL in a tight sandstone reservoir," Journal of Petroleum Science and Engineering, vol. 194, p. 107498, 2020.

[22] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.

[23] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.

[24] L. C. Mao and J. Zhao, "A survey on the new generation of deep learning in image processing," IEEE Access, vol. 7, pp. 172231–172263, 2019.

[25] R. Udendhran, M. Balamurugan, A. Suresh, and R. Varatharajan, "Enhancing image processing architecture using deep learning for embedded vision systems," Microprocessors and Microsystems, vol. 76, p. 103094, 2020.

[26] R. J. G. van Sloun, R. Cohen, and Y. C. Eldar, "Deep learning in ultrasound imaging," Proceedings of the IEEE, vol. 108, no. 1, pp. 11–29, 2020.

[27] J. Liu, C. Wu, and J. Wang, "Gated recurrent units based neural network for time heterogeneous feedback recommendation," Information Sciences, vol. 423, pp. 50–65, 2018.

[28] Y. Hao, Y. Sheng, and J. Wang, "Variant gated recurrent units with encoders to preprocess packets for payload-aware intrusion detection," IEEE Access, vol. 7, pp. 49985–49998, 2019.

[29] F. T. Dezaki, Z. B. Liao, C. Luong et al., "Cardiac phase detection in echocardiograms with densely gated recurrent neural networks and global extrema loss," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1821–1832, 2019.

[30] Z. Wang, Y. Dong, W. Liu, and Z. Ma, "A novel fault diagnosis approach for chillers based on 1-D convolutional neural network and gated recurrent unit," Sensors, vol. 20, no. 9, p. 2458, 2020.

[31] H. Colonius, "An invitation to coupling and copulas, with applications to multisensory modeling," Journal of Mathematical Psychology, vol. 74, pp. 2–10, 2016.

[32] N. Uyttendaele, "On the estimation of nested Archimedean copulas: a theoretical and an experimental comparison," Computational Statistics, vol. 33, no. 2, pp. 1047–1070, 2018.

[33] P. Vincent-Lamarre, M. Calderini, and J.-P. Thivierge, "Learning long temporal sequences in spiking networks by multiplexing neural oscillations," Frontiers in Computational Neuroscience, vol. 14, p. 78, 2020.

[34] W. Huang, Y. Li, and Y. Huang, "Deep hybrid neural network and improved differential neuroevolution for chaotic time series prediction," IEEE Access, vol. 8, pp. 159552–159565, 2020.

[35] Y. Wu, M. Yuan, S. Dong, L. Lin, and Y. Liu, "Remaining useful life estimation of engineered systems using vanilla LSTM neural networks," Neurocomputing, vol. 275, pp. 167–179, 2018.

[36] H. Yan, Y. Qin, S. Xiang, Y. Wang, and H. Chen, "Long-term gear life prediction based on ordered neurons LSTM neural networks," Measurement, vol. 165, p. 108205, 2020.

[37] X. Shi, J. Wang, G. Liu, L. Yang, X. Ge, and S. Jiang, "Application of extreme learning machine and neural networks in total organic carbon content prediction in organic shale with wire line logs," Journal of Natural Gas Science and Engineering, vol. 33, pp. 687–702, 2016.



The performance of the model is evaluated according to the corresponding evaluation indices.

3. Data Processing and Analysis

3.1. Data Preparation. As shown in Figure 6(a), the Ordos Basin is a large superimposed basin in the central part of China with huge oil and gas resources. It is considered one of the basins in China with the greatest potential for growth in oil and gas reserves and production, and it is a petroliferous basin whose main structural traps are stratigraphic and lithologic traps. The accumulated explored natural gas reserves in the Ordos Basin are about 2.7 × 10^12 m³, and the geological resources still to be explored are about 12.5 × 10^12 m³, which indicates that the Ordos Basin is still in the early stage of exploration. In addition, the Ordos Basin is also rich in unconventional oil and gas resources such as coalbed methane, shale gas, and tight sandstone gas. The geological resources of coalbed methane in the basin are about 9.86 × 10^12 m³, of which about 1.79 × 10^12 m³ are recoverable. The geological resources of shale gas in the basin are about 5.3 × 10^12 m³, of which the recoverable reserves are about 2.91 × 10^12 m³. The geological resources of tight sandstone gas in the basin are about 7.84 × 10^12 m³, of which about 3.53 × 10^12 m³ are recoverable [18]. For both conventional and unconventional oil and gas resources, the Ordos Basin clearly has great exploration potential. Therefore, a fast and accurate method is needed to obtain porosity information, which is an important parameter for oil and gas exploration.

As shown in Figure 6(b), well E1 is a shale gas well in the northeastern part of the Ordos Basin, China. Preparing a database that includes the logging data is a very important step in the construction of the model. In this study, the available well logs of well E1 include the spontaneous potential (SP), compensated neutron log (CNL), compressional wave slowness (DTC), resistivity (RT), uranium (U), natural gamma-ray (GR), bulk density (DEN), potassium (K), and thorium (TH). Table 1 summarizes the recorded logging parameter data for well E1.

Figure 7 presents the logging parameter plots for well E1. This study is vital and meaningful for oil and gas exploration, since obtaining porosity information from logging parameters is very important in reservoir evaluation.

3.2. Data Analysis. For the model, it is very important to select suitable logging inputs when preparing for accurate porosity prediction. In this study, the Kendall rank correlation coefficient τ, the Spearman rank correlation coefficient ρ, and the Pearson linear correlation coefficient R, based on the Copula function, are used to quantitatively calculate the correlation between the logging parameters and the porosity parameter. The comparison of the absolute values of the three correlation coefficients is shown in Figure 8.
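The three coefficients can be computed as follows. This sketch uses scipy's standard rank-based estimators of τ and ρ (which depend on the data only through the copula) as a stand-in for the paper's Copula-function calculation, on a synthetic monotone-but-nonlinear pair rather than real well data.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr, pearsonr

# Synthetic stand-in for a log curve and porosity: a monotone but
# nonlinear relationship (real well data would be used in practice).
rng = np.random.default_rng(0)
x = rng.uniform(40, 90, 200)     # e.g. hypothetical DTC-like values
y = np.exp(x / 30.0)             # nonlinear, strictly monotone response

tau, _ = kendalltau(x, y)        # rank correlation
rho, _ = spearmanr(x, y)         # rank correlation
r, _ = pearsonr(x, y)            # linear correlation only
# For a strictly monotone nonlinear relation, tau and rho reach 1 while
# Pearson's R stays below 1, mirroring the GR-porosity observation below.
```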

As can be seen from Figure 8, Pearson correlation analysis often ignores the nonlinear correlation between logging parameters and porosity. In the correlation analysis between logging data and porosity, the correlation coefficients of DTC, DEN, and CNL with porosity are relatively

Figure 5: Modeling flow chart of the porosity parameter prediction model based on CA-GRU (data → correlation analysis → data preprocessing → division into training and testing sets → model parameter initialization → devising and training the GRU model → adjusting parameters until the expected results are achieved → saving the model → testing the trained model → obtaining the prediction results).


high, indicating that DTC, DEN, and CNL have a strong correlation with porosity. Although the Pearson correlation coefficient between GR and porosity is relatively low, the correlation analysis method based on the Copula function yields higher τ and ρ, which shows that the linear correlation between GR and porosity is low but the nonlinear correlation is high; there is thus a strong nonlinear correlation between them. To sum up, both linear and nonlinear correlation analysis methods are used to select among the conventional logging parameters in this study. Finally, the four logging parameters DTC, DEN, CNL, and GR are selected as independent variables for network modeling, and a porosity prediction model is established.
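The GRU recurrence at the heart of the model can be illustrated with a single-cell numpy sketch over the four selected inputs. The weight shapes, hidden size, and initialization here are illustrative assumptions, not the paper's configuration (which stacks 3 layers of 41 nodes and adds a regression output head), and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One standard GRU step: update gate z, reset gate r, candidate h~."""
    z = sigmoid(x_t @ W_z + h_prev @ U_z)               # update gate
    r = sigmoid(x_t @ W_r + h_prev @ U_r)               # reset gate
    h_tilde = np.tanh(x_t @ W_h + (r * h_prev) @ U_h)   # candidate state
    return (1 - z) * h_prev + z * h_tilde               # new hidden state

# Four inputs (DTC, DEN, CNL, GR) and a small illustrative hidden size.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
params = [rng.normal(0, 0.1, s) for s in [(n_in, n_hid), (n_hid, n_hid)] * 3]
h = np.zeros(n_hid)
for x_t in rng.normal(0, 1, (50, n_in)):                # a time step of 50
    h = gru_step(x_t, h, *params)
```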

4. Results and Discussion

As mentioned above, a set of 5165 data points from well E1 has been used to build the model. This set is divided into training and test subsets by depth: the training subset consists of the first 3874 data points, while the test subset consists of the remaining data points. For comparison, RNN, GRU, CA-GRU, and multiple linear regression (MLR) models were selected and applied to predict porosity.
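The depth-ordered split is straightforward to express; a minimal sketch with a placeholder array standing in for the depth-sorted samples:

```python
import numpy as np

# Depth-ordered split used in the study: the first 3874 of the 5165
# samples train the model and the remaining 1291 test it (~75/25).
data = np.arange(5165)            # stand-in for samples sorted by depth
n_train = 3874
train, test = data[:n_train], data[n_train:]
```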

In the beginning, all data are standardized to the range from zero to one with equation (11), which is applied to the test and training data sets for the RNN, GRU, and CA-GRU models. In addition, the convergence of the neural networks may be

Figure 6: (a) The position of the Ordos Basin in China; (b) the structural belts of the Ordos Basin and the sampling well (E1) near the Yimeng uplift [15].

Table 1: Summary of the recorded logging parameters for well E1.

Statistic | SP (mV) | GR (API) | DTC (μs/ft) | RT (Ω·m) | U (ppm) | K (%) | TH (ppm) | DEN (g/cm³) | CNL (%)
Min.      | 67.64   | 20.65    | 54.42       | 9.44     | 2.09    | 0.40  | 1.71     | 2.45        | 7.39
Max.      | 74.98   | 66.75    | 91.56       | 52.01    | 7.36    | 2.33  | 11.90    | 2.72        | 27.27
Average   | 72.17   | 44.04    | 74.33       | 23.29    | 4.39    | 1.08  | 5.43     | 2.57        | 16.18
SD        | 1.38    | 7.74     | 6.89        | 7.09     | 1.08    | 0.31  | 1.77     | 0.04        | 3.78

Min. = minimum value; Max. = maximum value; SD = standard deviation.


guaranteed by the preprocessing, which also increases the computational speed of the network methods.

$$x^{*} = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \qquad (11)$$

where $x^{*}$ is the normalized value, $x$ is the original value, and the maximum and minimum of the original dataset are represented by $x_{\max}$ and $x_{\min}$, respectively.
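Equation (11) is ordinary min-max scaling; a minimal sketch (the sample values are merely the reconstructed min/mean/max DTC figures from Table 1, used for illustration):

```python
import numpy as np

def minmax_normalize(x):
    """Scale a curve into [0, 1] with equation (11): x* = (x - min) / (max - min)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

dtc = np.array([54.42, 74.33, 91.56])   # illustrative DTC values (μs/ft)
scaled = minmax_normalize(dtc)
```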

In the study, several statistical measures, including the correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE), are used to compare prediction performance. These performance indicators give a sense of how well the prediction model performs with respect to the actual values. The following equations define these standard tools.

The R criterion can be described as follows:

$$R = \sqrt{1 - \frac{\sum_{i=1}^{n}\left(\hat{p}_{i} - p_{i}\right)^{2}}{\sum_{i=1}^{n}\hat{p}_{i}^{2} - \frac{1}{n}\left(\sum_{i=1}^{n}\hat{p}_{i}\right)^{2}}} \qquad (12)$$

The MAE criterion may be depicted as follows:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{p}_{i} - p_{i}\right| \qquad (13)$$

VAF is usually used to assess the accuracy of a model by comparing the assessed values with the evaluated output of the model; the VAF criterion can be computed as follows:

$$\mathrm{VAF} = \left(1 - \frac{\operatorname{var}\left(\hat{p}_{i} - p_{i}\right)}{\operatorname{var}\left(\hat{p}_{i}\right)}\right) \times 100 \qquad (14)$$

The RMSE is traditionally applied to monitor the quality of the error function of the model; the performance of the model increases as the RMSE decreases. The RMSE criterion can be computed as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{p}_{i} - p_{i}\right)^{2}} \qquad (15)$$

where, for equations (12)–(15), $p$ is the measured porosity, $\hat{p}$ denotes the assessed porosity, and $n$ is the number of testing data points.
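The four criteria can be sketched together as follows. Note that the printed equation (12) is partly garbled in extraction, so the R form below is a best-effort reading of it; the more familiar Pearson R would give somewhat different values.

```python
import numpy as np

def metrics(p_hat, p):
    """Correlation coefficient R, MAE, VAF (%) and RMSE per equations (12)-(15)."""
    p_hat = np.asarray(p_hat, dtype=float)
    p = np.asarray(p, dtype=float)
    n = p.size
    r = np.sqrt(1.0 - np.sum((p_hat - p) ** 2)
                / (np.sum(p_hat ** 2) - np.sum(p_hat) ** 2 / n))
    mae = np.mean(np.abs(p_hat - p))
    vaf = (1.0 - np.var(p_hat - p) / np.var(p_hat)) * 100.0
    rmse = np.sqrt(np.mean((p_hat - p) ** 2))
    return r, mae, vaf, rmse

# A perfect prediction gives the ideal values R = 1, MAE = 0, VAF = 100, RMSE = 0.
r, mae, vaf, rmse = metrics([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
```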

Figure 7: Logging parameter plots for well E1 (SP, GR, DTC, DEN, CNL, K, RT, TH, and U versus depth, about 3340–3440 m).

Figure 8: Correlation coefficients (τ, ρ, and R) between the logging data and the porosity parameter.



Abbreviations

DNN Deep neural networkGRU Gated recurrent unitNMR Nuclear magnetic resonanceCNN Convolutional neural networkRNN Recurrent neural networkSAE Stack autoencoderU UraniumDEN Bulk densityTH )oriumMAE Mean absolute errorRMSE Root mean square errorRNN Recurrent neural networkLSTM Long short-term memoryCA Correlation analysisMLR Multiple linear regressionSP Spontaneous potentialCNL Compensated neutron logDTC Compressional wave slownessRT ResistivityGR Natural gamma-rayK PotassiumR Correlation coefficientVAF Variance accounted forCA-GRU Correlation analysis-gated recurrent unit

Data Availability

)e data used to support the findings of this study areavailable from the corresponding author upon request

Table 3 Comprising CA-GRU model with RNN GRU and MLRmodels by using four performance indicators including correlationcoefficient (R) mean absolute error (MAE) variance accounted for(VAF) as well as root mean square error (RMSE)

Method R MAE VAF() RMSERNN 09034 02728 815798 14952GRU 0928 02303 861089 12643CA-GRU 09423 02101 887578 11412MLR 08489 09035 717449 22589

Mathematical Problems in Engineering 11

Conflicts of Interest

)e authors declare that there are no conflicts of interest

Acknowledgments

)e authors are grateful for the financial support providedby the National Natural Science Foundation of China(41504041) )e authors would also like to thank XiaoyanDeng for assisting in preparation of this manuscript par-ticularly grateful to her for checking grammar style andsyntax in the manuscript

References

[1] S Singh A I Kanli and S Sevgen ldquoA general approach forporosity estimation using artificial neural network method acase study from Kansas gas fieldrdquo Studia Geophysica etGeodaetica vol 60 no 1 pp 130ndash140 2016

[2] Z Zhong and T R Carr ldquoApplication of a new hybrid particleswarm optimization-mixed kernels function-based supportvector machine model for reservoir porosity prediction Acase study in Jacksonburg-Stringtown oil field West VirginiaUSArdquo Interpretation vol 7 no 1 pp T97ndashT112 2019

[3] A Bakhorji H Mustafa and S Aramco Rock PhysicsModeling and Analysis of Elastic Signatures for Intermediate toLow Porosity Sandstones pp 1ndash5 Society of ExplorationGeophysicists Houston TX USA 2012

[4] S Tao Z Pan S Chen and S Tang ldquoCoal seam porosity andfracture heterogeneity of marcolithotypes in the FanzhuangBlock southern Qinshui Basin Chinardquo Journal of NaturalGas Science and Engineering vol 66 pp 148ndash158 2019

[5] S Tao X Zhao D Tang C Deng Q Meng and Y Cui ldquoAmodel for characterizing the continuous distribution of gasstoring space in low-rank coalsrdquo Fuel vol 233 pp 552ndash5572018

[6] S Tao S Chen D Tang X Zhao H Xu and S Li ldquoMaterialcomposition pore structure and adsorption capacity of low-rank coals around the first coalification jump A case ofeastern Junggar Basin Chinardquo Fuel vol 211 pp 804ndash8152018

[7] C Zhang X Wang and L Q Zhu ldquoEstimation of totalporosity in shale formations from element capture loggingand conventional logging datardquo Arabian Journal of Geo-sciences vol 11 no 11 p 264 2018

[8] A Khaksar and C M Griffiths ldquolowastPorosity form sonic log ingas-bearing shaly sandstones field data versus empiricalequationsrdquo Exploration Geophysics vol 29 no 3-4pp 440ndash446 1998

[9] C Guo C Zhang and L Zhu ldquoPredicting the total porosity ofshale gas reservoirsrdquo Petroleum Science and Technologyvol 35 no 10 pp 1022ndash1031 2017

[10] H L Crow R J Enkin J B Percival and H A J RussellldquoDownhole nuclear magnetic resonance logging in glacio-marine sediments near Ottawa Ontario Canadardquo NearSurface Geophysics vol 18 no 6 p 591 2020

[11] J O Parra C Hackert M Bennett and H A CollierldquoPermeability and porosity images based on NMR sonic andseismic reflectivity application to a carbonate aquiferrdquo CeLeading Edge vol 22 no 11 pp 1102ndash1108 2003

[12] E P Leite and A C Vidal ldquo3D porosity prediction fromseismic inversion and neural networksrdquo Computers amp Geo-sciences vol 37 no 8 pp 1174ndash1180 2011

[13] X Shi G Liu Y Cheng et al ldquoBrittleness index prediction inshale gas reservoirs based on efficient network modelsrdquoJournal of Natural Gas Science and Engineering vol 35pp 673ndash685 2016

[14] P Wang S Peng and T He ldquoA novel approach to totalorganic carbon content prediction in shale gas reservoirs withwell logs data Tonghua Basin Chinardquo Journal of Natural GasScience and Engineering vol 55 pp 1ndash15 2018

[15] P Wang and S Peng ldquoA new scheme to improve the per-formance of artificial intelligence techniques for estimatingtotal organic carbon from well logsrdquo Energies vol 11 no 4p 747 2018

[16] F S Feng P Wang Z Wei et al ldquoA new method for pre-dicting the permeability of sandstone in deep reservoirsrdquoGeofluids vol 2020 Article ID 8844464 16 pages 2020

[17] A A Talkhestani ldquoPrediction of effective porosity fromseismic attributes using locally linear model tree algorithmrdquoGeophysical Prospecting vol 63 no 3 pp 680ndash693 2015

[18] P Wang and S Peng ldquoOn a new method of estimating shearwave velocity from conventional well logsrdquo Journal of Pe-troleum Science and Engineering vol 180 pp 105ndash123 2019

[19] F S T Haklidir and M Haklidir ldquoPrediction of reservoirtemperatures using hydrogeochemical data western anatoliageothermal systems (Turkey) a machine learning approachrdquoNatural Resources Research vol 29 no 4 pp 2333ndash23462020

[20] A A Mahmoud S Elkatatny and D Al Shehri ldquoApplicationof machine learning in evaluation of the static youngrsquosmodulus for sandstone formationsrdquo Sustainability vol 12no 5 p 1880 2020

[21] M He H Gu and H Wan ldquoLog interpretation for lithologyand fluid identification using deep neural network combinedwith MAHAKIL in a tight sandstone reservoirrdquo Journal ofPetroleum Science and Engineering vol 194 p 107498 2020

[22] Y Bengio ldquoLearning deep architectures for AIrdquo Foundationsand Trends in Machine Learning vol 2 no 1 pp 1ndash127 2009

[23] G E Hinton S Osindero and Y-W Teh ldquoA fast learningalgorithm for deep belief netsrdquo Neural Computation vol 18no 7 pp 1527ndash1554 2006

[24] L C Mao and J Zhao ldquoA survey on the new generation ofdeep learning in image processingrdquo IEEE Access vol 7pp 172231ndash172263 2019

[25] R Udendhran M Balamurugan A Suresh andR Varatharajan ldquoEnhancing image processing architectureusing deep learning for embedded vision systemsrdquo Micro-processors and Microsystems vol 76 p 103094 2020

[26] R J G Van Sloun R Cohen and Y C Eldar ldquoDeep learningin ultrasound imagingrdquo Proceedings of the IEEE vol 108no 1 pp 11ndash29 2020

[27] J Liu C Wu and J Wang ldquoGated recurrent units basedneural network for time heterogeneous feedback recom-mendationrdquo Information Sciences vol 423 pp 50ndash65 2018

[28] Y Hao Y Sheng and J Wang ldquoVariant gated recurrent unitswith encoders to preprocess packets for payload-aware in-trusion detectionrdquo IEEE Access vol 7 pp 49985ndash49998 2019

[29] F T Dezaki Z B Liao C Luong et al ldquoCardiac phasedetection in echocardiograms with densely gated recurrentneural networks and global extrema lossrdquo IEEE Transactionson Medical Imaging vol 38 no 8 pp 1821ndash1832 2019

[30] ZWang Y DongW Liu and ZMa ldquoA novel fault diagnosisapproach for chillers based on 1-D convolutional neuralnetwork and gated recurrent unitrdquo Sensors vol 20 no 9p 2458 2020

12 Mathematical Problems in Engineering

[31] H Colonius ldquoAn invitation to coupling and copulas withapplications to multisensory modelingrdquo Journal of Mathe-matical Psychology vol 74 pp 2ndash10 2016

[32] N Uyttendaele ldquoOn the estimation of nested Archimedeancopulas a theoretical and an experimental comparisonrdquoComputational Statistics vol 33 no 2 pp 1047ndash1070 2018

[33] P Vincent-Lamarre M Calderini and J-P )iviergeldquoLearning long temporal sequences in spiking networks bymultiplexing neural oscillationsrdquo Frontiers in ComputationalNeuroscience vol 14 p 78 2020

[34] W Huang Y Li and Y Huang ldquoDeep Hybrid NeuralNetwork and Improved Differential Neuroevolution forChaotic Time Series Predictionrdquo IEEE Access vol 8pp 159552ndash159565 2020

[35] Y Wu M Yuan S Dong L Lin and Y Liu ldquoRemaininguseful life estimation of engineered systems using vanillaLSTM neural networksrdquo Neurocomputing vol 275 pp 167ndash179 2018

[36] H Yan Y Qin S Xiang Y Wang and H Chen ldquoLong-termgear life prediction based on ordered neurons LSTM neuralnetworksrdquo Measurement vol 165 p 108205 2020

[37] X Shi J Wang G Liu L Yang X Ge and S Jiang ldquoAp-plication of extreme learning machine and neural networks intotal organic carbon content prediction in organic shale withwire line logsrdquo Journal of Natural Gas Science and Engi-neering vol 33 pp 687ndash702 2016

Mathematical Problems in Engineering 13


high, indicating that DTC, DEN, and CNL have a strong correlation with porosity. Although the Pearson correlation coefficient between GR and porosity is relatively low, the correlation analysis method based on the Copula function obtains higher τ and ρ values, which shows that the linear correlation between GR and porosity is low but the nonlinear correlation is high. To sum up, both linear and nonlinear correlation analysis methods are used to select conventional logging parameters in this study. Finally, four logging parameters, DTC, DEN, CNL, and GR, are selected as independent variables for network modeling, and a porosity prediction model is established.
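The contrast drawn above between linear (Pearson) and rank-based (Kendall τ, Spearman ρ) correlation can be illustrated with a small self-contained sketch; the data here are synthetic, not the well E1 logs:

```python
import numpy as np

def pearson_r(x, y):
    # Linear correlation: sensitive to the shape of the relationship.
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def spearman_rho(x, y):
    # Spearman's rho: Pearson r computed on the ranks (assumes no ties).
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return pearson_r(rx, ry)

def kendall_tau(x, y):
    # Kendall's tau: concordant minus discordant pairs over all pairs (no ties).
    n = len(x)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[j] - x[i]) * np.sign(y[j] - y[i])
    return float(s / (n * (n - 1) / 2))

x = np.linspace(0.1, 3.0, 50)
y = np.exp(x)  # strongly nonlinear but perfectly monotone relationship
print(pearson_r(x, y))    # noticeably below 1: linear measure understates the link
print(spearman_rho(x, y)) # 1.0
print(kendall_tau(x, y))  # 1.0
```

This mirrors the GR case described in the text: a monotone nonlinear dependence yields τ = ρ = 1 while the Pearson coefficient stays well below 1.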

4. Results and Discussion

As mentioned above, a set of 5165 data points from well E1 has been used to build the model. This set is divided into training and test subsets by depth: the training subset consists of the first 3874 data points, while the test subset consists of the remaining points. For comparison, RNN, GRU, CA-GRU, and multiple linear regression (MLR) models were applied to predict porosity.
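The depth-ordered split described above can be sketched with index arithmetic; the array below is a stand-in for the real depth-sorted log data, and only the sample counts come from the text:

```python
import numpy as np

n_samples = 5165   # data points in well E1 (from the text)
n_train = 3874     # first part of the depth-ordered series used for training

data = np.arange(n_samples)          # placeholder for depth-sorted samples
train, test = data[:n_train], data[n_train:]

print(len(train), len(test))         # 3874 1291
```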

In the beginning, all data are normalized to the range from zero to one with equation (11) before the training and test sets are fed to the RNN, GRU, and CA-GRU models. In addition, the convergence of the neural networks may be


Figure 6: (a) The position of Ordos Basin in China; (b) the structural belts of Ordos Basin and the sampling well near the Yimeng uplift [15].

Table 1: Summary of the recorded logging parameters for well E1.

          SP (mV)  GR (API)  DTC (μs/ft)  RT (Ω·m)  U (ppm)  K (%)  TH (ppm)  DEN (g/cm³)  CNL (%)
Miv       67.64    20.65     54.42        9.44      2.09     0.40   1.71      2.45         7.39
Mav       74.98    66.75     91.56        52.01     7.36     2.33   11.90     2.72         27.27
Average   72.17    44.04     74.33        23.29     4.39     1.08   5.43      2.57         16.18
SD        1.38     7.74      6.89         7.09      1.08     0.31   1.77      0.04         3.78

Miv = minimum value; Mav = maximum value; SD = standard deviation.

Mathematical Problems in Engineering 7

guaranteed by this preprocessing, which also increases the computation speed of the network methods.

x* = (x − x_min) / (x_max − x_min)   (11)

where x* is the normalized data, x is the original data, and x_max and x_min are the maximum and minimum of the original dataset, respectively.
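Equation (11) is plain min-max scaling; a minimal sketch (the illustrative GR values below are assumptions, not a real log slice):

```python
import numpy as np

def minmax_scale(x):
    # Equation (11): x* = (x - x_min) / (x_max - x_min), mapping x into [0, 1].
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

gr = np.array([20.65, 44.04, 66.75])  # illustrative GR readings (API)
scaled = minmax_scale(gr)
print(scaled)                          # endpoints map to 0 and 1
```

In practice each logging curve would be scaled with its own minimum and maximum, and the scaling parameters of the training set would typically be reused on the test set to avoid leakage.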

In this study, several statistical indicators, including the correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE), are used to compare prediction performance. These indicators quantify how close the predictions of a model are to the actual values. The following equations give these standard tools.

The R criterion can be described as follows:

R = sqrt( 1 − Σ_{i=1}^{n} (p̂_i − p_i)² / ( Σ_{i=1}^{n} p̂_i² − (1/n)(Σ_{i=1}^{n} p̂_i)² ) )   (12)

The MAE criterion may be depicted as follows:

MAE = (1/n) Σ_{i=1}^{n} |p̂_i − p_i|   (13)

VAF is usually used to assess the accuracy of a model by comparing the measured values with the model output, and the VAF criterion can be computed as follows:

VAF = [ 1 − var(p̂_i − p_i) / var(p̂_i) ] × 100   (14)

The RMSE is traditionally applied to monitor the error of the model; the performance of the model improves as RMSE decreases. The RMSE criterion can be computed as follows:

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (p̂_i − p_i)² )   (15)

where, for equations (12)–(15), p is the measured porosity, p̂ denotes the predicted porosity, and n is the number of testing data points.


Figure 7: Logging parameter plots for well E1.


Figure 8: Correlation between logging data and the porosity parameter.


In machine learning studies, robustness is a key characteristic. To check that the way training and testing data sets are selected does not affect the robustness of the GRU model, we randomly selected ten training and testing sets from the 5165 data points in well E1. The randomly selected sets are shown in Figure 9(a), where the color represents the depth of each sample. For each of these ten cases, a GRU model was built and its RMSE on the training set and the testing set was calculated; the results are shown in Figure 9(b).

As shown in Figure 9(b), differences in the training samples lead to inconsistent GRU prediction errors; that is, the selection of training samples changes the prediction error of the GRU model. To further analyze this influence, statistical methods were used: the RMSE data of the ten cases were subjected to one-way analysis of variance (at a significance level of 5%), with the results shown in Table 2. P = 0.569 > 0.05, indicating no significant differences between the RMSE data of the ten cases. We can therefore safely conclude that although different training samples produce different GRU prediction errors, the difference is not statistically significant. From a statistical point of view, the method of selecting training and testing samples has little influence on the robustness of the GRU model. Accordingly, this study divides the training and test sets in depth order, which conforms to the statistical law and is feasible and practical.
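The one-way ANOVA behind Table 2 can be reproduced in outline; the numbers below are toy values, not the paper's RMSE data:

```python
import numpy as np

def one_way_anova(groups):
    # Classic one-way ANOVA decomposition: SS_total = SS_between + SS_within.
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_x) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, f

# Toy example: three "cases", each with two RMSE values (train, test).
sb, sw, f = one_way_anova([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
print(sb, sw, f)   # 4.0 1.5 4.0
```

The P value would then come from the F distribution with the corresponding degrees of freedom, for example via `scipy.stats.f.sf(f, df_between, df_within)`.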

As we all know, determining the parameters of the neural network model is the premise of successfully constructing it. In this study, the adaptive learning rate optimization algorithm (Adam) is used to optimize the network. The Adam algorithm combines the advantages of the RMSProp and AdaGrad algorithms and can design an independent adaptive learning rate for each parameter. According to the tests, the best learning rate is 0.005, the batch size is 10, and the time step is 50. According to previous studies [18, 37] on neural networks, the numbers of hidden layers and hidden layer nodes have a great influence on the prediction performance, and the optimal values differ between research fields. Choosing the optimal numbers of hidden layers and nodes is therefore the key to ensuring the prediction accuracy of a neural network model. Based on traversal optimization, this study sets the hidden layer range to [1, 10] and the hidden layer node range to [1, 100]. By comparing the root mean square error (RMSE) of the model under different numbers of hidden layers and nodes, the configuration with the minimum RMSE is obtained. The optimization test results are shown in Figure 10. It can be seen from the figure that too many or too few hidden layers and neurons lead to drastic changes in the RMSE of the prediction results and thus a decrease in prediction accuracy. Through traversal optimization, the optimal numbers of hidden layers and nodes for this study are 3 and 41, respectively.
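The GRU layers tuned above follow the standard gated recurrent unit formulation. A minimal NumPy forward pass of one GRU cell is sketched below using the textbook equations (this is not the paper's code; the weights are random placeholders, and some formulations swap the roles of z and 1 − z in the final combination):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU time step. W, U, b hold the update (z), reset (r), and
    candidate (h) parameters as dicts keyed 'z', 'r', 'h'."""
    z = sigmoid(W['z'] @ x_t + U['z'] @ h_prev + b['z'])        # update gate
    r = sigmoid(W['r'] @ x_t + U['r'] @ h_prev + b['r'])        # reset gate
    h_cand = np.tanh(W['h'] @ x_t + U['h'] @ (r * h_prev) + b['h'])
    return (1 - z) * h_prev + z * h_cand                        # new hidden state

rng = np.random.default_rng(0)
n_in, n_hid = 4, 41   # 4 inputs (DTC, DEN, CNL, GR); 41 nodes, as in the text
W = {k: rng.normal(0, 0.1, (n_hid, n_in)) for k in 'zrh'}
U = {k: rng.normal(0, 0.1, (n_hid, n_hid)) for k in 'zrh'}
b = {k: np.zeros(n_hid) for k in 'zrh'}

h = np.zeros(n_hid)
for x_t in rng.normal(size=(50, n_in)):  # a time step (window) of 50 samples
    h = gru_step(x_t, h, W, U, b)
print(h.shape)                            # (41,)
```

The final hidden state would then be mapped to a porosity value by a dense output layer; in practice a framework such as Keras or PyTorch handles these recurrences and the Adam training loop.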

According to the above correlation analysis results, four logging parameters, DTC, DEN, CNL, and GR, which have a strong correlation with porosity, are selected as the modeling

Table 2: The results of one-way ANOVA.

                 Sum of squares   df   Mean square   F       P value
Between groups   0.077            9    0.009         0.883   0.569
Within groups    0.097            10   0.010
Total            0.174            19


Figure 9: Results of GRU with randomly selected training sets. (a) Ten cases of randomly selected sets. (b) The RMSE of training and testing for each case.


independent variables of the porosity prediction model. Figure 11 shows the comparison between the measured and predicted porosity of the RNN, GRU, CA-GRU, and MLR models along the depth of well E1, together with local enlargements. It shows that the results of the MLR model are quite different from the measured porosity, while


Figure 10: Model performance under different numbers of hidden layers and hidden layer nodes.


Figure 11: A comparison between measured and predicted porosity with the RNN, GRU, CA-GRU, and MLR models, respectively, in well E1. (a) Measured versus RNN-predicted porosity. (b) Measured versus GRU-predicted porosity. (c) Measured versus CA-GRU-predicted porosity. (d) Measured versus MLR-predicted porosity.


the results of the RNN, GRU, and CA-GRU models are very consistent with the actual situation.

Eventually, the statistical indicators mentioned previously were also employed to carry out this comparison, and the outcomes are presented in Table 3. Table 3 shows that the three porosity prediction models based on deep learning methods were far superior in prediction accuracy to the model established with the traditional multiple linear regression (MLR) method. Comparing the prediction precision of the three deep learning models established in this study (RNN, GRU, and CA-GRU), the GRU and CA-GRU models were evidently superior to the RNN model, among which CA-GRU achieved the best precision, with the highest R of 0.9423 and VAF of 88.7578% and the lowest MAE of 0.2101 and RMSE of 1.1412. Between CA-GRU and GRU there was only a slight discrepancy in precision.

To conclude, both the CA-GRU and GRU models can provide successful porosity prediction performance, with the CA-GRU model built in this study showing higher precision and efficiency. The verification results in well E1 prove that the CA-GRU model with optimal inputs can be regarded as an efficient tool for predicting porosity, particularly in areas where high-precision porosity data are required.

5. Conclusions

Logging parameters reflect the acoustic, electrical, and other physical characteristics of strata deposited in different geological periods. Porosity is the characteristic response of different formations and has strong time-series characteristics. Directly predicting porosity from logging parameters can effectively reduce the high cost of special methods such as rock physics analysis and nuclear magnetic resonance, providing an accurate and low-cost basis for decision-making in petroleum exploration and development. Deep learning technology can discover the nonlinear relationships between different parameters entirely from the data, which makes it well suited to nonlinear geophysical interpretation problems. It can not only make full use of the response characteristics of the various logging parameters to different formations but also overcome the limitations of linear prediction with traditional empirical formulas.

Considering the time-series characteristics of logging and porosity parameters, a reservoir porosity prediction method based on the GRU neural network is proposed in this study. The correlation analysis method based on the Copula function is used to select sensitive logging parameters, and the GRU neural network is then used to build the prediction model. This approach not only considers the influence of strongly correlated sample data on the prediction of porosity parameters but also takes into account the nonlinear mapping relationship between porosity parameters and logging curves, as well as the trend and correlation of logging information with depth. The Copula-based correlation measure can select the well logging curves that are sensitive to porosity parameters, reduce the dimension of the model input, eliminate redundancy between variables, and improve the overall prediction performance of the model. The research results show that the GRU neural network has strong feature extraction ability and can effectively extract deep characteristics reflecting porosity parameters from logging data. Compared with methods such as multiple linear regression analysis, it predicts porosity parameters more accurately and has strong robustness and anti-interference ability. This study provides a new idea for the accurate interpretation of logging data in oil and gas field exploration and development.

Abbreviations

DNN: Deep neural network
GRU: Gated recurrent unit
NMR: Nuclear magnetic resonance
CNN: Convolutional neural network
RNN: Recurrent neural network
SAE: Stacked autoencoder
U: Uranium
DEN: Bulk density
TH: Thorium
MAE: Mean absolute error
RMSE: Root mean square error
LSTM: Long short-term memory
CA: Correlation analysis
MLR: Multiple linear regression
SP: Spontaneous potential
CNL: Compensated neutron log
DTC: Compressional wave slowness
RT: Resistivity
GR: Natural gamma ray
K: Potassium
R: Correlation coefficient
VAF: Variance accounted for
CA-GRU: Correlation analysis-gated recurrent unit

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Table 3: Comparing the CA-GRU model with the RNN, GRU, and MLR models using four performance indicators: correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE).

Method   R        MAE      VAF (%)   RMSE
RNN      0.9034   0.2728   81.5798   1.4952
GRU      0.9280   0.2303   86.1089   1.2643
CA-GRU   0.9423   0.2101   88.7578   1.1412
MLR      0.8489   0.9035   71.7449   2.2589


Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are grateful for the financial support provided by the National Natural Science Foundation of China (41504041). The authors would also like to thank Xiaoyan Deng for assisting in the preparation of this manuscript, and are particularly grateful to her for checking the grammar, style, and syntax of the manuscript.

References

[1] S Singh A I Kanli and S Sevgen ldquoA general approach forporosity estimation using artificial neural network method acase study from Kansas gas fieldrdquo Studia Geophysica etGeodaetica vol 60 no 1 pp 130ndash140 2016

[2] Z Zhong and T R Carr ldquoApplication of a new hybrid particleswarm optimization-mixed kernels function-based supportvector machine model for reservoir porosity prediction Acase study in Jacksonburg-Stringtown oil field West VirginiaUSArdquo Interpretation vol 7 no 1 pp T97ndashT112 2019

[3] A Bakhorji H Mustafa and S Aramco Rock PhysicsModeling and Analysis of Elastic Signatures for Intermediate toLow Porosity Sandstones pp 1ndash5 Society of ExplorationGeophysicists Houston TX USA 2012

[4] S Tao Z Pan S Chen and S Tang ldquoCoal seam porosity andfracture heterogeneity of marcolithotypes in the FanzhuangBlock southern Qinshui Basin Chinardquo Journal of NaturalGas Science and Engineering vol 66 pp 148ndash158 2019

[5] S Tao X Zhao D Tang C Deng Q Meng and Y Cui ldquoAmodel for characterizing the continuous distribution of gasstoring space in low-rank coalsrdquo Fuel vol 233 pp 552ndash5572018

[6] S Tao S Chen D Tang X Zhao H Xu and S Li ldquoMaterialcomposition pore structure and adsorption capacity of low-rank coals around the first coalification jump A case ofeastern Junggar Basin Chinardquo Fuel vol 211 pp 804ndash8152018

[7] C Zhang X Wang and L Q Zhu ldquoEstimation of totalporosity in shale formations from element capture loggingand conventional logging datardquo Arabian Journal of Geo-sciences vol 11 no 11 p 264 2018

[8] A Khaksar and C M Griffiths ldquolowastPorosity form sonic log ingas-bearing shaly sandstones field data versus empiricalequationsrdquo Exploration Geophysics vol 29 no 3-4pp 440ndash446 1998

[9] C Guo C Zhang and L Zhu ldquoPredicting the total porosity ofshale gas reservoirsrdquo Petroleum Science and Technologyvol 35 no 10 pp 1022ndash1031 2017

[10] H L Crow R J Enkin J B Percival and H A J RussellldquoDownhole nuclear magnetic resonance logging in glacio-marine sediments near Ottawa Ontario Canadardquo NearSurface Geophysics vol 18 no 6 p 591 2020

[11] J O Parra C Hackert M Bennett and H A CollierldquoPermeability and porosity images based on NMR sonic andseismic reflectivity application to a carbonate aquiferrdquo CeLeading Edge vol 22 no 11 pp 1102ndash1108 2003

[12] E P Leite and A C Vidal ldquo3D porosity prediction fromseismic inversion and neural networksrdquo Computers amp Geo-sciences vol 37 no 8 pp 1174ndash1180 2011

[13] X Shi G Liu Y Cheng et al ldquoBrittleness index prediction inshale gas reservoirs based on efficient network modelsrdquoJournal of Natural Gas Science and Engineering vol 35pp 673ndash685 2016


12 Mathematical Problems in Engineering



guaranteed by the preprocessing, which increased the computing speed of the network methods:

$x^{*} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$,  (11)

where $x^{*}$ is the normalized data, $x$ is the original data, and $x_{\max}$ and $x_{\min}$ are the maximum and minimum of the original dataset, respectively.
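The min-max normalization of equation (11) can be sketched as follows; this is a minimal Python illustration, and the function name and sample readings are not from the paper:

```python
def min_max_normalize(data):
    """Scale a list of log readings to [0, 1] using equation (11):
    x* = (x - x_min) / (x_max - x_min)."""
    x_min, x_max = min(data), max(data)
    span = x_max - x_min
    if span == 0:  # constant curve: map every value to 0
        return [0.0 for _ in data]
    return [(x - x_min) / span for x in data]

# Made-up sonic-slowness readings (not the paper's data):
print(min_max_normalize([60.0, 75.0, 90.0]))  # -> [0.0, 0.5, 1.0]
```

Each logging curve is normalized independently, so curves with very different numeric ranges (e.g., resistivity versus density) contribute on a comparable scale.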

In this study, several statistical indicators, including the correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE), are used to compare prediction performance. These indicators quantify how close the output of a prediction model is to the actual values. The following equations define these standard tools.

The R criterion can be described as follows:

$R = \sqrt{1 - \frac{\sum_{i=1}^{n}\left[(\hat{p} - p)_{i}\right]^{2}}{\sum_{i=1}^{n}\hat{p}_{i}^{2} - \frac{1}{n}\left(\sum_{i=1}^{n}\hat{p}_{i}\right)^{2}}}$.  (12)

The MAE criterion may be depicted as follows:

$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{p}_{i} - p_{i}\right|$.  (13)

VAF is usually used to assess the accuracy of a model by comparing the measured values with the evaluated output of the model, and the VAF criterion can be computed as follows:

$\mathrm{VAF} = \left(1 - \frac{\operatorname{var}\left[(\hat{p})_{i} - (p)_{i}\right]}{\operatorname{var}\left[(\hat{p})_{i}\right]}\right) \times 100$.  (14)

The RMSE is traditionally applied to monitor the quality of the error function of the model; the performance of the model increases as RMSE decreases. The RMSE criterion can be computed as follows:

$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[(\hat{p})_{i} - (p)_{i}\right]^{2}}$.  (15)

In equations (12)–(15), $p$ is the measured porosity, $\hat{p}$ denotes the assessed porosity, and $n$ is the number of testing data points.
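The four indicators can be coded directly from equations (12)–(15). The sketch below uses plain Python lists of measured and estimated porosity; the function names are illustrative, and the denominator of equation (12) is interpreted as the sum-of-squares-about-the-mean form, which matches the garbled printed formula most closely:

```python
import math

def mae(p_hat, p):
    # Equation (13): mean absolute error
    return sum(abs(ph - pi) for ph, pi in zip(p_hat, p)) / len(p)

def rmse(p_hat, p):
    # Equation (15): root mean square error
    return math.sqrt(sum((ph - pi) ** 2 for ph, pi in zip(p_hat, p)) / len(p))

def vaf(p_hat, p):
    # Equation (14): variance accounted for, in percent
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    residuals = [ph - pi for ph, pi in zip(p_hat, p)]
    return (1 - var(residuals) / var(p_hat)) * 100

def r_criterion(p_hat, p):
    # Equation (12): correlation criterion
    n = len(p)
    ss_res = sum((ph - pi) ** 2 for ph, pi in zip(p_hat, p))
    ss_tot = sum(ph ** 2 for ph in p_hat) - (sum(p_hat) ** 2) / n
    return math.sqrt(1 - ss_res / ss_tot)
```

For a perfect prediction (`p_hat == p`), MAE and RMSE are 0, VAF is 100, and R is 1, which is a useful sanity check when wiring these metrics into a training loop.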

Figure 7: Logging parameters plot for well E1 (depth interval 3340–3440 m; tracks: SP (mV), GR (API), TH (10⁻⁶), U (10⁻⁶), DTC (μs), DEN (g/cm³), CNL (%), K (%), RT (Ω·m)).

Figure 8: Correlation between logging data and the porosity parameter (correlation coefficients, roughly 0–0.45, for SP, GR, U, TH, DTC, DEN, CNL, K, and RT).


In machine learning, robustness is a key characteristic. To examine whether the way the training and testing data sets are selected affects the robustness of the GRU, we randomly drew ten training and testing sets from the 5165 data points in well E1. The randomly selected sets are shown in Figure 9(a), where the color represents the corresponding depth of each sample. For each of these ten cases, a GRU model was built, and the RMSE of the training set and the testing set was calculated; the results are shown in Figure 9(b).
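The random drawing of ten train/test partitions can be sketched as below. The 80/20 split fraction and the helper name are assumptions for illustration; the paper does not state the exact proportion used in each random case:

```python
import random

def random_splits(n_samples, n_cases, train_fraction=0.8, seed=0):
    """Draw `n_cases` independent random train/test index partitions."""
    rng = random.Random(seed)
    n_train = int(n_samples * train_fraction)
    cases = []
    for _ in range(n_cases):
        indices = list(range(n_samples))
        rng.shuffle(indices)  # random assignment of depth samples
        cases.append((indices[:n_train], indices[n_train:]))
    return cases

# Ten cases over the 5165 depth samples of well E1:
cases = random_splits(5165, 10)
print(len(cases), len(cases[0][0]), len(cases[0][1]))  # -> 10 4132 1033
```

Each case would then be used to train a GRU and record training/testing RMSE, as in Figure 9(b).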

As shown in Figure 9(b), differences in the training samples lead to inconsistent GRU prediction errors; the selection of training samples therefore changes the GRU prediction error. To further analyze the influence of different training samples on the GRU forecasting error, we used statistical methods. A one-way analysis of variance (at the 5% significance level) was carried out on the RMSE data of the ten cases above, and the results are shown in Table 2. P = 0.569 > 0.05, indicating that there are no significant differences among the RMSE data of the 10 cases. From this comparative analysis, we can safely conclude that different training samples produce differences in the GRU prediction error, but the differences are not statistically significant. From a statistical point of view, the method of selecting training and testing samples has little influence on the robustness of the GRU. Therefore, this study divides the training sample set and the test set in depth order, which conforms to the statistical behavior observed above and is feasible and practical.
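The F statistic in Table 2 follows from the reported sums of squares, so its internal consistency can be checked with a few lines of pure Python (values taken from Table 2; small rounding differences are expected because the table reports rounded sums of squares):

```python
def anova_f(ss_between, df_between, ss_within, df_within):
    """One-way ANOVA F statistic: ratio of between- to within-group mean squares."""
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within
    return ms_between / ms_within

# Sums of squares and degrees of freedom from Table 2:
f_stat = anova_f(0.077, 9, 0.097, 10)
print(round(f_stat, 3))  # -> 0.882, close to the reported F = 0.883
```

An F value well below the critical value (here reflected in P = 0.569) means the between-case variation in RMSE is small relative to the within-case variation.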

As is well known, determining the parameters of the neural network model is the premise of successfully constructing the network. In this study, the adaptive learning rate optimization algorithm Adam is used to optimize the network. Adam combines the advantages of the RMSProp and AdaGrad algorithms and can design an independent adaptive learning rate for each parameter. According to the tests, the best learning rate is 0.005, the batch size is 10, and the time step is 50. According to previous studies [18, 37] on neural networks, the numbers of hidden layers and hidden layer nodes have great influence on the prediction performance of neural networks, and the optimal numbers differ across research fields; choosing them well is the key to ensuring the prediction accuracy of a neural network model. Therefore, based on traversal (exhaustive) optimization, this study sets the hidden layer range as [1, 10] and the hidden layer node range as [1, 100]. By comparing the root mean square error (RMSE) of the model under different numbers of hidden layers and hidden layer nodes, the combination with the minimum RMSE is obtained. The optimization test results are shown in Figure 10. It can be seen from the figure that too many or too few hidden layers and neurons lead to drastic changes in the RMSE of the prediction results and thus a decrease in prediction accuracy. Through traversal optimization, the optimal numbers of hidden layers and nodes for this study are 3 and 41, respectively.
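The traversal optimization over hidden-layer and node counts amounts to a grid search for the minimum RMSE. In the sketch below, the `evaluate_rmse` callback stands in for training a GRU with the given architecture and measuring its test RMSE; the callback and the toy objective are hypothetical:

```python
def grid_search(evaluate_rmse, layer_range=range(1, 11), node_range=range(1, 101)):
    """Return the (layers, nodes) pair with the lowest RMSE over the grid
    [1, 10] x [1, 100], as in the traversal optimization described above."""
    best, best_rmse = None, float("inf")
    for layers in layer_range:
        for nodes in node_range:
            rmse = evaluate_rmse(layers, nodes)
            if rmse < best_rmse:
                best_rmse, best = rmse, (layers, nodes)
    return best, best_rmse

# Toy stand-in objective with its minimum at 3 layers and 41 nodes:
toy = lambda layers, nodes: (layers - 3) ** 2 + ((nodes - 41) / 20) ** 2
print(grid_search(toy))  # -> ((3, 41), 0.0)
```

In practice each callback invocation is expensive (one full GRU training run), which is why the grid is kept to modest ranges.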

According to the above correlation analysis results, four logging parameters (DTC, DEN, CNL, and GR), which have strong correlations with porosity, are selected as the modeling

Table 2: The results of one-way ANOVA.

Source           Sum of squares   df   Mean square   F       P value
Between groups   0.077            9    0.009         0.883   0.569
Within groups    0.097            10   0.010
Total            0.174            19

Figure 9: Results of the GRU with randomly selected training sets. (a) Ten cases of randomly selected sets (Cases 1–10; vertical axis: sample index, 500–5000; color indicates sample depth, 3340–3440 m). (b) The RMSE of training and testing for each case (0–2).


independent variables of the porosity prediction model. Figure 11 shows the comparison between the actual and predicted porosity of the RNN, GRU, CA-GRU, and MLR models along the depth of well E1, together with locally enlarged views. It shows that the results of the MLR model differ considerably from the measured porosity, while

Figure 10: Model performance (RMSE, roughly 1–2) under different numbers of hidden layers and hidden layer nodes.

Figure 11: A comparison between measured and predicted porosity with the RNN, GRU, CA-GRU, and MLR models, respectively, in well E1 (porosity 0–25% over depths 3330–3450 m, with enlarged views of the 3370–3390 m interval). (a) Measured versus RNN-predicted porosity. (b) Measured versus GRU-predicted porosity. (c) Measured versus CA-GRU-predicted porosity. (d) Measured versus MLR-predicted porosity.


the results of the RNN, GRU, and CA-GRU models are very consistent with the actual situation.

Finally, the statistical indicators mentioned previously were employed to carry out this comparison, and the outcomes are presented in Table 3. Table 3 shows that the three porosity prediction models based on deep learning are far superior in prediction accuracy to the model established with the traditional multiple linear regression (MLR) method. Comparing the prediction precision of the three deep learning models established in this study (RNN, GRU, and CA-GRU), the GRU and CA-GRU models are clearly superior to the RNN model, and CA-GRU achieves the best precision, with the highest R of 0.9423 and VAF of 88.7578% and the lowest MAE of 0.2101 and RMSE of 1.1412. The precision of CA-GRU and GRU differs only slightly.

To conclude, both the CA-GRU and GRU models provide successful porosity prediction performance, with the CA-GRU model showing the higher precision. From Table 3 we can conclude that the CA-GRU model developed in this study predicts porosity efficiently and with high precision. The verification results in well E1 show that the CA-GRU model with optimal inputs can be regarded as an efficient tool for predicting porosity, particularly in areas where high-precision porosity data are required.

5. Conclusions

Logging parameters reflect the acoustic, electrical, and other physical responses of strata deposited in different geological periods. Porosity is a characteristic response of different formations and has strong time-series characteristics. Directly predicting porosity from logging parameters can effectively reduce the high cost of special methods such as rock physical analysis and nuclear magnetic resonance, providing an accurate and low-cost basis for decision-making in petroleum exploration and development. Deep learning technology can discover the nonlinear relationships between different parameters entirely from the data, which makes it well suited to nonlinear geophysical interpretation problems: it can make full use of the response characteristics of the various logging parameters to different formations and, at the same time, get rid of the limitations of the linear predictions of traditional empirical formulas.

Considering the time-series characteristics of logging parameters and porosity, a reservoir porosity prediction method based on the GRU neural network is proposed in this study. It uses a correlation analysis method based on the Copula function to select sensitive logging parameters and then uses a GRU neural network to build the prediction model. This approach not only considers the influence of strongly correlated sample data on the prediction of porosity but also accounts for the nonlinear mapping relationship between porosity and the logging curves, as well as the trend and correlation of the logging information with depth. The Copula-based correlation measure selects the well logging curves that are sensitive to porosity, reduces the dimension of the model input, eliminates redundancy between variables, and improves the overall prediction performance of the model. The research results show that the GRU neural network model has strong feature extraction ability and can effectively extract deep characteristics reflecting porosity from logging data. Compared with methods such as multiple linear regression analysis, it predicts porosity more accurately and has strong robustness and anti-interference ability. This study provides a new idea for the accurate interpretation of logging data in oil and gas field exploration and development.

Abbreviations

DNN: Deep neural network
GRU: Gated recurrent unit
NMR: Nuclear magnetic resonance
CNN: Convolutional neural network
RNN: Recurrent neural network
SAE: Stack autoencoder
U: Uranium
DEN: Bulk density
TH: Thorium
MAE: Mean absolute error
RMSE: Root mean square error
LSTM: Long short-term memory
CA: Correlation analysis
MLR: Multiple linear regression
SP: Spontaneous potential
CNL: Compensated neutron log
DTC: Compressional wave slowness
RT: Resistivity
GR: Natural gamma ray
K: Potassium
R: Correlation coefficient
VAF: Variance accounted for
CA-GRU: Correlation analysis-gated recurrent unit

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Table 3: Comparing the CA-GRU model with the RNN, GRU, and MLR models using four performance indicators: correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE).

Method   R        MAE      VAF (%)   RMSE
RNN      0.9034   0.2728   81.5798   1.4952
GRU      0.9280   0.2303   86.1089   1.2643
CA-GRU   0.9423   0.2101   88.7578   1.1412
MLR      0.8489   0.9035   71.7449   2.2589


Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are grateful for the financial support provided by the National Natural Science Foundation of China (41504041). The authors would also like to thank Xiaoyan Deng for assisting in the preparation of this manuscript, and are particularly grateful to her for checking the grammar, style, and syntax of the manuscript.

References

[1] S. Singh, A. I. Kanli, and S. Sevgen, "A general approach for porosity estimation using artificial neural network method: a case study from Kansas gas field," Studia Geophysica et Geodaetica, vol. 60, no. 1, pp. 130–140, 2016.

[2] Z. Zhong and T. R. Carr, "Application of a new hybrid particle swarm optimization-mixed kernels function-based support vector machine model for reservoir porosity prediction: a case study in Jacksonburg-Stringtown oil field, West Virginia, USA," Interpretation, vol. 7, no. 1, pp. T97–T112, 2019.

[3] A. Bakhorji, H. Mustafa, and S. Aramco, Rock Physics Modeling and Analysis of Elastic Signatures for Intermediate to Low Porosity Sandstones, pp. 1–5, Society of Exploration Geophysicists, Houston, TX, USA, 2012.

[4] S. Tao, Z. Pan, S. Chen, and S. Tang, "Coal seam porosity and fracture heterogeneity of marcolithotypes in the Fanzhuang Block, southern Qinshui Basin, China," Journal of Natural Gas Science and Engineering, vol. 66, pp. 148–158, 2019.

[5] S. Tao, X. Zhao, D. Tang, C. Deng, Q. Meng, and Y. Cui, "A model for characterizing the continuous distribution of gas storing space in low-rank coals," Fuel, vol. 233, pp. 552–557, 2018.

[6] S. Tao, S. Chen, D. Tang, X. Zhao, H. Xu, and S. Li, "Material composition, pore structure and adsorption capacity of low-rank coals around the first coalification jump: a case of eastern Junggar Basin, China," Fuel, vol. 211, pp. 804–815, 2018.

[7] C. Zhang, X. Wang, and L. Q. Zhu, "Estimation of total porosity in shale formations from element capture logging and conventional logging data," Arabian Journal of Geosciences, vol. 11, no. 11, p. 264, 2018.

[8] A. Khaksar and C. M. Griffiths, "Porosity from sonic log in gas-bearing shaly sandstones: field data versus empirical equations," Exploration Geophysics, vol. 29, no. 3-4, pp. 440–446, 1998.

[9] C. Guo, C. Zhang, and L. Zhu, "Predicting the total porosity of shale gas reservoirs," Petroleum Science and Technology, vol. 35, no. 10, pp. 1022–1031, 2017.

[10] H. L. Crow, R. J. Enkin, J. B. Percival, and H. A. J. Russell, "Downhole nuclear magnetic resonance logging in glaciomarine sediments near Ottawa, Ontario, Canada," Near Surface Geophysics, vol. 18, no. 6, p. 591, 2020.

[11] J. O. Parra, C. Hackert, M. Bennett, and H. A. Collier, "Permeability and porosity images based on NMR, sonic, and seismic reflectivity: application to a carbonate aquifer," The Leading Edge, vol. 22, no. 11, pp. 1102–1108, 2003.

[12] E. P. Leite and A. C. Vidal, "3D porosity prediction from seismic inversion and neural networks," Computers & Geosciences, vol. 37, no. 8, pp. 1174–1180, 2011.

[13] X. Shi, G. Liu, Y. Cheng et al., "Brittleness index prediction in shale gas reservoirs based on efficient network models," Journal of Natural Gas Science and Engineering, vol. 35, pp. 673–685, 2016.

[14] P. Wang, S. Peng, and T. He, "A novel approach to total organic carbon content prediction in shale gas reservoirs with well logs data, Tonghua Basin, China," Journal of Natural Gas Science and Engineering, vol. 55, pp. 1–15, 2018.

[15] P. Wang and S. Peng, "A new scheme to improve the performance of artificial intelligence techniques for estimating total organic carbon from well logs," Energies, vol. 11, no. 4, p. 747, 2018.

[16] F. S. Feng, P. Wang, Z. Wei et al., "A new method for predicting the permeability of sandstone in deep reservoirs," Geofluids, vol. 2020, Article ID 8844464, 16 pages, 2020.

[17] A. A. Talkhestani, "Prediction of effective porosity from seismic attributes using locally linear model tree algorithm," Geophysical Prospecting, vol. 63, no. 3, pp. 680–693, 2015.

[18] P. Wang and S. Peng, "On a new method of estimating shear wave velocity from conventional well logs," Journal of Petroleum Science and Engineering, vol. 180, pp. 105–123, 2019.

[19] F. S. T. Haklidir and M. Haklidir, "Prediction of reservoir temperatures using hydrogeochemical data, western Anatolia geothermal systems (Turkey): a machine learning approach," Natural Resources Research, vol. 29, no. 4, pp. 2333–2346, 2020.

[20] A. A. Mahmoud, S. Elkatatny, and D. Al Shehri, "Application of machine learning in evaluation of the static Young's modulus for sandstone formations," Sustainability, vol. 12, no. 5, p. 1880, 2020.

[21] M. He, H. Gu, and H. Wan, "Log interpretation for lithology and fluid identification using deep neural network combined with MAHAKIL in a tight sandstone reservoir," Journal of Petroleum Science and Engineering, vol. 194, p. 107498, 2020.

[22] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.

[23] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.

[24] L. C. Mao and J. Zhao, "A survey on the new generation of deep learning in image processing," IEEE Access, vol. 7, pp. 172231–172263, 2019.

[25] R. Udendhran, M. Balamurugan, A. Suresh, and R. Varatharajan, "Enhancing image processing architecture using deep learning for embedded vision systems," Microprocessors and Microsystems, vol. 76, p. 103094, 2020.

[26] R. J. G. Van Sloun, R. Cohen, and Y. C. Eldar, "Deep learning in ultrasound imaging," Proceedings of the IEEE, vol. 108, no. 1, pp. 11–29, 2020.

[27] J. Liu, C. Wu, and J. Wang, "Gated recurrent units based neural network for time heterogeneous feedback recommendation," Information Sciences, vol. 423, pp. 50–65, 2018.

[28] Y. Hao, Y. Sheng, and J. Wang, "Variant gated recurrent units with encoders to preprocess packets for payload-aware intrusion detection," IEEE Access, vol. 7, pp. 49985–49998, 2019.

[29] F. T. Dezaki, Z. B. Liao, C. Luong et al., "Cardiac phase detection in echocardiograms with densely gated recurrent neural networks and global extrema loss," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1821–1832, 2019.

[30] Z. Wang, Y. Dong, W. Liu, and Z. Ma, "A novel fault diagnosis approach for chillers based on 1-D convolutional neural network and gated recurrent unit," Sensors, vol. 20, no. 9, p. 2458, 2020.

[31] H. Colonius, "An invitation to coupling and copulas: with applications to multisensory modeling," Journal of Mathematical Psychology, vol. 74, pp. 2–10, 2016.

[32] N. Uyttendaele, "On the estimation of nested Archimedean copulas: a theoretical and an experimental comparison," Computational Statistics, vol. 33, no. 2, pp. 1047–1070, 2018.

[33] P. Vincent-Lamarre, M. Calderini, and J.-P. Thivierge, "Learning long temporal sequences in spiking networks by multiplexing neural oscillations," Frontiers in Computational Neuroscience, vol. 14, p. 78, 2020.

[34] W. Huang, Y. Li, and Y. Huang, "Deep hybrid neural network and improved differential neuroevolution for chaotic time series prediction," IEEE Access, vol. 8, pp. 159552–159565, 2020.

[35] Y. Wu, M. Yuan, S. Dong, L. Lin, and Y. Liu, "Remaining useful life estimation of engineered systems using vanilla LSTM neural networks," Neurocomputing, vol. 275, pp. 167–179, 2018.

[36] H. Yan, Y. Qin, S. Xiang, Y. Wang, and H. Chen, "Long-term gear life prediction based on ordered neurons LSTM neural networks," Measurement, vol. 165, p. 108205, 2020.

[37] X. Shi, J. Wang, G. Liu, L. Yang, X. Ge, and S. Jiang, "Application of extreme learning machine and neural networks in total organic carbon content prediction in organic shale with wire line logs," Journal of Natural Gas Science and Engineering, vol. 33, pp. 687–702, 2016.

Mathematical Problems in Engineering 13

Page 9: Research Article On a Deep Learning Method of Estimating ...theporosityandcoreanalysisdata,andtheresultsaregreatly affected by human factors. Nuclear magnetic resonance (NMR) porosity

In the study of machine learning robustness is a keycharacteristic Considering that the method of selectingtraining and testing data sets avoids the impact on the ro-bustness of GRU we randomly selected ten training andtesting set from 5165 data points in well E1 )e randomlyselected sets are shown in Figure 9(a) where the colorrepresents the corresponding depth of the sample For eachof these ten cases GRU can be modeled and its RMSE of thetraining set and the testing set can be calculated respectivelyand the results are shown in Figure 9(b)

As shown in Figure 9(b) the difference of trainingsamples will lead to inconsistent prediction errors of GRU)erefore it can be concluded that the selection of trainingsamples will bring about changes in GRU prediction errorsIn order to further analyze the influence of different trainingsamples of GRU forecasting error we use statisticalmethods )e above ten cases of RMSE data have carried onthe single factor analysis of variance (the confidence level of5) and the analysis results are shown in Table 2P 0569gt 005 indicating that there are no significantdifferences between RMSE data of 10 cases )roughcomparative analysis we can safely conclude that differenttraining samples had differences in the prediction error ofGRU but there is no significant difference in statistics Itmeans from a statistical point of view the selection methodof training and testing samples had little influence on therobustness of GRU)erefore this study divides the trainingsample set and the test set according to the depth orderwhich conforms to the statistical law and is feasible andpractical

As we all know determining the parameters of the neuralnetwork model is the premise of the successful constructionof the network model In this study adaptive learning rateoptimization algorithm (Adam algorithm) is used to opti-mize the network Adam algorithm combines the advantagesof RMSProp algorithm and AdaGrad algorithm and can

design the independent adaptive learning rate for differentparameters According to the test the best learning rate is0005 the batch is 10 and the time step is 50 According tothe previous studies [18 37] on neural networks the numberof hidden layers and hidden layer nodes has great influenceon the prediction performance of neural networks Fordifferent research fields the number of hidden layers andhidden layer nodes are different Choosing the optimalnumber of hidden layers and hidden layer nodes is the key toensure the prediction accuracy of neural network model)erefore based on the ergodic optimization thinking thisstudy sets the hidden layer value range as [1 10] and thehidden layer node value range as [1 100] By comparing theroot mean square error (RMSE) of the model under differenthidden layers and hidden layer nodes the hidden layer valueand hidden layer node value of the model under the min-imum root mean square error (RMSE) are obtained )eoptimization test results are shown in Figure 10 It can beseen from the figure that too many or too few hidden layersand neurons in the network will lead to drastic changes inroot mean square error (RMSE) of prediction resultsresulting in the decrease of prediction accuracy )roughtraversal optimization the optimal number of hidden layersand nodes for this study is 3 and 41 respectively

According to the above correlation analysis results fourlogging parameters DTC DEN CNL and GR which havestrong correlation with porosity are selected as themodeling

Table 2 )e results of one-way ANOVA

Sum ofsquares df Mean

square F P value

Betweengroups 0077 9 0009 0883 0569

Within groups 0097 10 0010Total 0174 19

3340

3360

3380

3400

3420

3440

Dep

th (m

)

Case

1

Case

3

Case

4

Case

5

Case

6

Case

7

Case

8

Case

9

Case

10

Case

2

Random sets

500045004000350030002500200015001000

500Se

eds

(a)

002040608

112141618

2

RMSE

Case

2

Case

3

Case

4

Case

5

Case

6

Case

7

Case

8

Case

9

Case

10

Case

1

Random sets

TrainingTesting

(b)

Figure 9 Results of ELM with randomly selected training set (a) Ten cases of randomly selected sets (b) )e RMSE of training and testingfor each case

Mathematical Problems in Engineering 9

independent variables of the porosity prediction modelFigure 11 shows the comparison between the actual andpredicted porosity of RNN GRU California-GRU and

MLR models with the depth of E1 well as well as the localenlargement operation It shows that the results of MLRmodels are quite different from the measured porosity while

2

18

16

14

12

2181614121

RMSE

108

64

2

Number of hidden layers Number of hidden layer nodes20

40 6080

100

Figure 10 Model performance under different hidden layers and node numbers

Porosity ()0 5 10 15 20 25

3330

3360

3390

3420

3450

Dep

th (m

)

3370

3375

3380

3385

3390

MeasuredPredicted

0 5 10

(a)

Porosity ()0 5 10 15 20 25

3330

3360

3390

3420

3450

Dep

th (m

)3370

3375

3380

3385

3390

MeasuredPredicted

0 5 10

(b)

Porosity ()0 5 10 15 20 25

3330

3360

3390

3420

3450

Dep

th (m

)

3370

3375

3380

3385

3390

MeasuredPredicted

0 5 10

(c)

Porosity ()0 5 10 15 20 25

3330

3360

3390

3420

3450

Dep

th (m

)

3370

3375

3380

3385

3390

MeasuredPredicted

0 5 10

(d)

Figure 11 A comparison between measured and predicted porosity with RNN GRU CA-GRU and MLR models respectively in well E1(a) A comparison between measured and RNN predicted porosity (b) A comparison between measured and GRU predicted porosity (c) Acomparison between measured and CA-GRU predicted porosity (d) A comparison between measured and MLR predicted porosity

10 Mathematical Problems in Engineering

the results of RNN GRU and California-GRU models arevery consistent with the actual situation

Eventually the statistical indicators mentioned previ-ously were also employed to carry out this comparison andoutcomes of this comparison were presented in Table 3What can be shown in Table 3 was that the three kinds ofporosity prediction models based on deep learning methodwere far superior to those established with the traditionalmultiple linear regression (MLR) method in their predictionaccuracy And by comparing the prediction precision of thethree types deep learning models (RNN GRU and CA-GRU) established in this study it can be evidently shownthat the GRU and CA-GRU models were superior to RNNmodel in prediction precision among which CA-GRU tookthe best precision due to the highest R of 09423 and VAF of887578 and the lowestMAE of 02101 and RMSE of 11412)rough the comparison between the prediction precision ofCA-GRU and GRU there was just slight discrepancy be-tween their precision

To conclude both CA-GRU and GRU models couldprovide a successful porosity prediction performance )eCA-GRU model showed a higher precision in predictingporosity and from Table 3 which we can conclude that CA-GRU modeled in this study took a higher efficiency inpredicting the porosity with its high precision And theverification results in well E1 prove that the CA-GRUmodelwith optimal inputs could be regarded as an efficient tool forpredicting the porosity in particular in the area where thehigh precision porosity data is required

5 Conclusions


Mathematical Problems in Engineering



independent variables of the porosity prediction model. Figure 11 compares the measured porosity with the porosity predicted by the RNN, GRU, CA-GRU, and MLR models along the depth of well E1, together with a local enlargement.

Figure 10: Model performance (RMSE) under different numbers of hidden layers and hidden-layer nodes.

Figure 11: A comparison between measured and predicted porosity (porosity in %, depth in m) with the RNN, GRU, CA-GRU, and MLR models, respectively, in well E1: (a) measured vs. RNN-predicted porosity; (b) measured vs. GRU-predicted porosity; (c) measured vs. CA-GRU-predicted porosity; (d) measured vs. MLR-predicted porosity.


The results of the MLR model differ considerably from the measured porosity, while those of the RNN, GRU, and CA-GRU models agree closely with the measured values.

Finally, the statistical indicators described previously were employed to carry out this comparison, and the outcomes are presented in Table 3. Table 3 shows that the three porosity prediction models based on deep learning were far superior in prediction accuracy to the model established with the traditional multiple linear regression (MLR) method. Comparing the prediction precision of the three deep learning models established in this study (RNN, GRU, and CA-GRU), the GRU and CA-GRU models were superior to the RNN model, and the CA-GRU model achieved the best precision, with the highest R of 0.9423 and VAF of 88.7578% and the lowest MAE of 0.2101 and RMSE of 1.1412. The difference in precision between the CA-GRU and GRU models was only slight.
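
The four indicators used in this comparison can be computed directly from the measured and predicted series. The following sketch shows one common formulation (the function name and toy data are my own, not from the paper), with VAF taken as the percentage of the measured variance explained by the prediction:

```python
# Illustrative computation of the four indicators in Table 3; the function
# name and the toy data are my own, not from the paper.
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    r = np.corrcoef(y_true, y_pred)[0, 1]                    # correlation coefficient R
    mae = np.mean(np.abs(y_true - y_pred))                   # mean absolute error
    vaf = (1.0 - np.var(y_true - y_pred) / np.var(y_true)) * 100.0  # variance accounted for, %
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))          # root mean square error
    return r, mae, vaf, rmse

# A perfect prediction scores R = 1, MAE = 0, VAF = 100, RMSE = 0.
r, mae, vaf, rmse = regression_metrics([5.0, 8.0, 12.0], [5.0, 8.0, 12.0])
```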

To conclude, both the CA-GRU and GRU models provide successful porosity prediction. The CA-GRU model shows the higher precision, and from Table 3 we can conclude that the CA-GRU model built in this study predicts porosity both efficiently and accurately. The verification results in well E1 demonstrate that the CA-GRU model with optimized inputs can be regarded as an efficient tool for porosity prediction, particularly in areas where high-precision porosity data are required.

5. Conclusions

Logging parameters, such as acoustic and electrical measurements, reflect the sedimentary characteristics of different geological periods. Porosity is the characteristic response of different formations and has strong time-series characteristics. Directly predicting porosity from logging parameters can effectively avoid the high cost of special methods such as rock physics analysis and nuclear magnetic resonance, providing an accurate and low-cost basis for decision-making in petroleum exploration and development. Deep learning technology can discover the nonlinear relationships between different parameters entirely from the data, which makes it well suited to nonlinear geophysical interpretation problems: it can make full use of the response characteristics of the various logging parameters to different formations and, at the same time, escape the limitations of linear prediction with traditional empirical formulas.
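
As an illustration of how a recurrent unit processes such depth-ordered data, the following is a minimal GRU cell forward pass in plain NumPy. The weights are random placeholders and the layer sizes are assumed; the paper's trained network is not reproduced here.

```python
# Illustrative only: a minimal GRU cell forward pass over a depth-ordered
# sequence of logging samples, in plain NumPy. The weights are random
# placeholders; a real model would learn them by backpropagation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU update: x is the log vector at one depth, h the hidden state."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h + p["bz"])             # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h + p["br"])             # reset gate
    h_cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])  # candidate state
    return (1.0 - z) * h + z * h_cand                            # blended new state

n_in, n_hid = 4, 8  # e.g. 4 selected logging curves, 8 hidden units (assumed sizes)
rng = np.random.default_rng(0)
shapes = [("Wz", (n_hid, n_in)), ("Uz", (n_hid, n_hid)), ("bz", (n_hid,)),
          ("Wr", (n_hid, n_in)), ("Ur", (n_hid, n_hid)), ("br", (n_hid,)),
          ("Wh", (n_hid, n_in)), ("Uh", (n_hid, n_hid)), ("bh", (n_hid,))]
p = {name: 0.1 * rng.standard_normal(shape) for name, shape in shapes}

h = np.zeros(n_hid)
sequence = rng.standard_normal((20, n_in))  # 20 depth samples of the 4 logs
for x in sequence:
    h = gru_step(x, h, p)
# h now summarizes the depth window; a linear output layer would map it to porosity
```

The reset gate lets the cell discard history at formation boundaries, while the update gate controls how much of the previous depth's state carries forward, which is what makes the unit suitable for log sequences.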

Considering the time-series characteristics of logging and porosity parameters, this study proposes a reservoir porosity prediction method based on the GRU neural network. A correlation analysis method based on the Copula function is first used to select sensitive logging parameters, and the GRU neural network is then used to build the prediction model. This approach not only considers the influence of strongly correlated sample data on porosity prediction but also accounts for the nonlinear mapping relationship between porosity and the logging curves, as well as the trend and correlation of logging information with depth. The Copula-based correlation measure selects the well logging curves most sensitive to porosity, reduces the dimension of the model input, eliminates redundancy between variables, and improves the overall prediction performance of the model. The results show that the GRU neural network has strong feature-extraction ability and can effectively extract deep characteristics reflecting porosity from logging data. Compared with methods such as multiple linear regression analysis, it predicts porosity more accurately and is more robust and resistant to interference. This study provides a new approach to the accurate interpretation of logging data in oil and gas field exploration and development.
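
The Copula-based correlation measure itself is not spelled out in this section. As a rough sketch of the selection idea, Kendall's tau, the rank-dependence measure that parameterizes many Archimedean copulas, can be used to rank candidate logging curves against core porosity. The curve names and synthetic data below are illustrative only, not the paper's data or exact method.

```python
# Sketch only: ranking candidate logging curves by rank dependence with core
# porosity. Kendall's tau parameterizes many Archimedean copulas, so it serves
# here as a stand-in for the paper's Copula-based correlation measure.
import numpy as np

def kendall_tau(x, y):
    """Plain O(n^2) Kendall's tau (no tie correction) -- enough for a sketch."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return 2.0 * s / (n * (n - 1))

rng = np.random.default_rng(1)
porosity = rng.uniform(2.0, 20.0, 50)                        # synthetic core porosity, %
curves = {
    "DEN": 2.7 - 0.05 * porosity + rng.normal(0, 0.02, 50),  # density falls with porosity
    "DTC": 180.0 + 5.0 * porosity + rng.normal(0, 8.0, 50),  # slowness rises with porosity
    "GR": rng.normal(80.0, 10.0, 50),                        # unrelated curve
}
ranking = sorted(curves, key=lambda name: abs(kendall_tau(curves[name], porosity)),
                 reverse=True)
# strongly dependent curves (DEN, DTC here) rank first and would be kept as inputs
```

Curves with |tau| near 1 would be retained as model inputs, which is the dimension-reduction step the paragraph above describes.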

Abbreviations

DNN: Deep neural network
GRU: Gated recurrent unit
NMR: Nuclear magnetic resonance
CNN: Convolutional neural network
RNN: Recurrent neural network
SAE: Stacked autoencoder
U: Uranium
DEN: Bulk density
TH: Thorium
MAE: Mean absolute error
RMSE: Root mean square error
LSTM: Long short-term memory
CA: Correlation analysis
MLR: Multiple linear regression
SP: Spontaneous potential
CNL: Compensated neutron log
DTC: Compressional wave slowness
RT: Resistivity
GR: Natural gamma ray
K: Potassium
R: Correlation coefficient
VAF: Variance accounted for
CA-GRU: Correlation analysis-gated recurrent unit

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Table 3: Comparison of the CA-GRU model with the RNN, GRU, and MLR models using four performance indicators: correlation coefficient (R), mean absolute error (MAE), variance accounted for (VAF), and root mean square error (RMSE).

Method    R        MAE      VAF (%)   RMSE
RNN       0.9034   0.2728   81.5798   1.4952
GRU       0.9280   0.2303   86.1089   1.2643
CA-GRU    0.9423   0.2101   88.7578   1.1412
MLR       0.8489   0.9035   71.7449   2.2589


Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are grateful for the financial support provided by the National Natural Science Foundation of China (41504041). The authors would also like to thank Xiaoyan Deng for assisting in the preparation of this manuscript, and are particularly grateful to her for checking the grammar, style, and syntax.

References

[1] S. Singh, A. I. Kanli, and S. Sevgen, "A general approach for porosity estimation using artificial neural network method: a case study from Kansas gas field," Studia Geophysica et Geodaetica, vol. 60, no. 1, pp. 130–140, 2016.

[2] Z. Zhong and T. R. Carr, "Application of a new hybrid particle swarm optimization-mixed kernels function-based support vector machine model for reservoir porosity prediction: a case study in Jacksonburg-Stringtown oil field, West Virginia, USA," Interpretation, vol. 7, no. 1, pp. T97–T112, 2019.

[3] A. Bakhorji, H. Mustafa, and S. Aramco, Rock Physics Modeling and Analysis of Elastic Signatures for Intermediate to Low Porosity Sandstones, pp. 1–5, Society of Exploration Geophysicists, Houston, TX, USA, 2012.

[4] S. Tao, Z. Pan, S. Chen, and S. Tang, "Coal seam porosity and fracture heterogeneity of marcolithotypes in the Fanzhuang Block, southern Qinshui Basin, China," Journal of Natural Gas Science and Engineering, vol. 66, pp. 148–158, 2019.

[5] S. Tao, X. Zhao, D. Tang, C. Deng, Q. Meng, and Y. Cui, "A model for characterizing the continuous distribution of gas storing space in low-rank coals," Fuel, vol. 233, pp. 552–557, 2018.

[6] S. Tao, S. Chen, D. Tang, X. Zhao, H. Xu, and S. Li, "Material composition, pore structure and adsorption capacity of low-rank coals around the first coalification jump: a case of eastern Junggar Basin, China," Fuel, vol. 211, pp. 804–815, 2018.

[7] C. Zhang, X. Wang, and L. Q. Zhu, "Estimation of total porosity in shale formations from element capture logging and conventional logging data," Arabian Journal of Geosciences, vol. 11, no. 11, p. 264, 2018.

[8] A. Khaksar and C. M. Griffiths, "Porosity from sonic log in gas-bearing shaly sandstones: field data versus empirical equations," Exploration Geophysics, vol. 29, no. 3-4, pp. 440–446, 1998.

[9] C. Guo, C. Zhang, and L. Zhu, "Predicting the total porosity of shale gas reservoirs," Petroleum Science and Technology, vol. 35, no. 10, pp. 1022–1031, 2017.

[10] H. L. Crow, R. J. Enkin, J. B. Percival, and H. A. J. Russell, "Downhole nuclear magnetic resonance logging in glaciomarine sediments near Ottawa, Ontario, Canada," Near Surface Geophysics, vol. 18, no. 6, p. 591, 2020.

[11] J. O. Parra, C. Hackert, M. Bennett, and H. A. Collier, "Permeability and porosity images based on NMR, sonic, and seismic reflectivity: application to a carbonate aquifer," The Leading Edge, vol. 22, no. 11, pp. 1102–1108, 2003.

[12] E. P. Leite and A. C. Vidal, "3D porosity prediction from seismic inversion and neural networks," Computers & Geosciences, vol. 37, no. 8, pp. 1174–1180, 2011.

[13] X. Shi, G. Liu, Y. Cheng et al., "Brittleness index prediction in shale gas reservoirs based on efficient network models," Journal of Natural Gas Science and Engineering, vol. 35, pp. 673–685, 2016.

[14] P. Wang, S. Peng, and T. He, "A novel approach to total organic carbon content prediction in shale gas reservoirs with well logs data, Tonghua Basin, China," Journal of Natural Gas Science and Engineering, vol. 55, pp. 1–15, 2018.

[15] P. Wang and S. Peng, "A new scheme to improve the performance of artificial intelligence techniques for estimating total organic carbon from well logs," Energies, vol. 11, no. 4, p. 747, 2018.

[16] F. S. Feng, P. Wang, Z. Wei et al., "A new method for predicting the permeability of sandstone in deep reservoirs," Geofluids, vol. 2020, Article ID 8844464, 16 pages, 2020.

[17] A. A. Talkhestani, "Prediction of effective porosity from seismic attributes using locally linear model tree algorithm," Geophysical Prospecting, vol. 63, no. 3, pp. 680–693, 2015.

[18] P. Wang and S. Peng, "On a new method of estimating shear wave velocity from conventional well logs," Journal of Petroleum Science and Engineering, vol. 180, pp. 105–123, 2019.

[19] F. S. T. Haklidir and M. Haklidir, "Prediction of reservoir temperatures using hydrogeochemical data, western Anatolia geothermal systems (Turkey): a machine learning approach," Natural Resources Research, vol. 29, no. 4, pp. 2333–2346, 2020.

[20] A. A. Mahmoud, S. Elkatatny, and D. Al Shehri, "Application of machine learning in evaluation of the static Young's modulus for sandstone formations," Sustainability, vol. 12, no. 5, p. 1880, 2020.

[21] M. He, H. Gu, and H. Wan, "Log interpretation for lithology and fluid identification using deep neural network combined with MAHAKIL in a tight sandstone reservoir," Journal of Petroleum Science and Engineering, vol. 194, p. 107498, 2020.

[22] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.

[23] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.

[24] L. C. Mao and J. Zhao, "A survey on the new generation of deep learning in image processing," IEEE Access, vol. 7, pp. 172231–172263, 2019.

[25] R. Udendhran, M. Balamurugan, A. Suresh, and R. Varatharajan, "Enhancing image processing architecture using deep learning for embedded vision systems," Microprocessors and Microsystems, vol. 76, p. 103094, 2020.

[26] R. J. G. Van Sloun, R. Cohen, and Y. C. Eldar, "Deep learning in ultrasound imaging," Proceedings of the IEEE, vol. 108, no. 1, pp. 11–29, 2020.

[27] J. Liu, C. Wu, and J. Wang, "Gated recurrent units based neural network for time heterogeneous feedback recommendation," Information Sciences, vol. 423, pp. 50–65, 2018.

[28] Y. Hao, Y. Sheng, and J. Wang, "Variant gated recurrent units with encoders to preprocess packets for payload-aware intrusion detection," IEEE Access, vol. 7, pp. 49985–49998, 2019.

[29] F. T. Dezaki, Z. B. Liao, C. Luong et al., "Cardiac phase detection in echocardiograms with densely gated recurrent neural networks and global extrema loss," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1821–1832, 2019.

[30] Z. Wang, Y. Dong, W. Liu, and Z. Ma, "A novel fault diagnosis approach for chillers based on 1-D convolutional neural network and gated recurrent unit," Sensors, vol. 20, no. 9, p. 2458, 2020.

[31] H. Colonius, "An invitation to coupling and copulas with applications to multisensory modeling," Journal of Mathematical Psychology, vol. 74, pp. 2–10, 2016.

[32] N. Uyttendaele, "On the estimation of nested Archimedean copulas: a theoretical and an experimental comparison," Computational Statistics, vol. 33, no. 2, pp. 1047–1070, 2018.

[33] P. Vincent-Lamarre, M. Calderini, and J.-P. Thivierge, "Learning long temporal sequences in spiking networks by multiplexing neural oscillations," Frontiers in Computational Neuroscience, vol. 14, p. 78, 2020.

[34] W. Huang, Y. Li, and Y. Huang, "Deep hybrid neural network and improved differential neuroevolution for chaotic time series prediction," IEEE Access, vol. 8, pp. 159552–159565, 2020.

[35] Y. Wu, M. Yuan, S. Dong, L. Lin, and Y. Liu, "Remaining useful life estimation of engineered systems using vanilla LSTM neural networks," Neurocomputing, vol. 275, pp. 167–179, 2018.

[36] H. Yan, Y. Qin, S. Xiang, Y. Wang, and H. Chen, "Long-term gear life prediction based on ordered neurons LSTM neural networks," Measurement, vol. 165, p. 108205, 2020.

[37] X. Shi, J. Wang, G. Liu, L. Yang, X. Ge, and S. Jiang, "Application of extreme learning machine and neural networks in total organic carbon content prediction in organic shale with wire line logs," Journal of Natural Gas Science and Engineering, vol. 33, pp. 687–702, 2016.




[26] R J G Van Sloun R Cohen and Y C Eldar ldquoDeep learningin ultrasound imagingrdquo Proceedings of the IEEE vol 108no 1 pp 11ndash29 2020

[27] J Liu C Wu and J Wang ldquoGated recurrent units basedneural network for time heterogeneous feedback recom-mendationrdquo Information Sciences vol 423 pp 50ndash65 2018

[28] Y Hao Y Sheng and J Wang ldquoVariant gated recurrent unitswith encoders to preprocess packets for payload-aware in-trusion detectionrdquo IEEE Access vol 7 pp 49985ndash49998 2019

[29] F T Dezaki Z B Liao C Luong et al ldquoCardiac phasedetection in echocardiograms with densely gated recurrentneural networks and global extrema lossrdquo IEEE Transactionson Medical Imaging vol 38 no 8 pp 1821ndash1832 2019

[30] ZWang Y DongW Liu and ZMa ldquoA novel fault diagnosisapproach for chillers based on 1-D convolutional neuralnetwork and gated recurrent unitrdquo Sensors vol 20 no 9p 2458 2020

12 Mathematical Problems in Engineering

[31] H Colonius ldquoAn invitation to coupling and copulas withapplications to multisensory modelingrdquo Journal of Mathe-matical Psychology vol 74 pp 2ndash10 2016

[32] N Uyttendaele ldquoOn the estimation of nested Archimedeancopulas a theoretical and an experimental comparisonrdquoComputational Statistics vol 33 no 2 pp 1047ndash1070 2018

[33] P Vincent-Lamarre M Calderini and J-P )iviergeldquoLearning long temporal sequences in spiking networks bymultiplexing neural oscillationsrdquo Frontiers in ComputationalNeuroscience vol 14 p 78 2020

[34] W Huang Y Li and Y Huang ldquoDeep Hybrid NeuralNetwork and Improved Differential Neuroevolution forChaotic Time Series Predictionrdquo IEEE Access vol 8pp 159552ndash159565 2020

[35] Y Wu M Yuan S Dong L Lin and Y Liu ldquoRemaininguseful life estimation of engineered systems using vanillaLSTM neural networksrdquo Neurocomputing vol 275 pp 167ndash179 2018

[36] H Yan Y Qin S Xiang Y Wang and H Chen ldquoLong-termgear life prediction based on ordered neurons LSTM neuralnetworksrdquo Measurement vol 165 p 108205 2020

[37] X Shi J Wang G Liu L Yang X Ge and S Jiang ldquoAp-plication of extreme learning machine and neural networks intotal organic carbon content prediction in organic shale withwire line logsrdquo Journal of Natural Gas Science and Engi-neering vol 33 pp 687ndash702 2016

Mathematical Problems in Engineering 13

Page 12: Research Article On a Deep Learning Method of Estimating ...theporosityandcoreanalysisdata,andtheresultsaregreatly affected by human factors. Nuclear magnetic resonance (NMR) porosity

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are grateful for the financial support provided by the National Natural Science Foundation of China (41504041). The authors would also like to thank Xiaoyan Deng for assisting in the preparation of this manuscript, and are particularly grateful to her for checking the grammar, style, and syntax of the manuscript.

References

[1] S. Singh, A. I. Kanli, and S. Sevgen, "A general approach for porosity estimation using artificial neural network method: a case study from Kansas gas field," Studia Geophysica et Geodaetica, vol. 60, no. 1, pp. 130–140, 2016.

[2] Z. Zhong and T. R. Carr, "Application of a new hybrid particle swarm optimization-mixed kernels function-based support vector machine model for reservoir porosity prediction: a case study in Jacksonburg-Stringtown oil field, West Virginia, USA," Interpretation, vol. 7, no. 1, pp. T97–T112, 2019.

[3] A. Bakhorji, H. Mustafa, and S. Aramco, Rock Physics Modeling and Analysis of Elastic Signatures for Intermediate to Low Porosity Sandstones, pp. 1–5, Society of Exploration Geophysicists, Houston, TX, USA, 2012.

[4] S. Tao, Z. Pan, S. Chen, and S. Tang, "Coal seam porosity and fracture heterogeneity of marcolithotypes in the Fanzhuang Block, southern Qinshui Basin, China," Journal of Natural Gas Science and Engineering, vol. 66, pp. 148–158, 2019.

[5] S. Tao, X. Zhao, D. Tang, C. Deng, Q. Meng, and Y. Cui, "A model for characterizing the continuous distribution of gas storing space in low-rank coals," Fuel, vol. 233, pp. 552–557, 2018.

[6] S. Tao, S. Chen, D. Tang, X. Zhao, H. Xu, and S. Li, "Material composition, pore structure and adsorption capacity of low-rank coals around the first coalification jump: a case of eastern Junggar Basin, China," Fuel, vol. 211, pp. 804–815, 2018.

[7] C. Zhang, X. Wang, and L. Q. Zhu, "Estimation of total porosity in shale formations from element capture logging and conventional logging data," Arabian Journal of Geosciences, vol. 11, no. 11, p. 264, 2018.

[8] A. Khaksar and C. M. Griffiths, "Porosity from sonic log in gas-bearing shaly sandstones: field data versus empirical equations," Exploration Geophysics, vol. 29, no. 3-4, pp. 440–446, 1998.

[9] C. Guo, C. Zhang, and L. Zhu, "Predicting the total porosity of shale gas reservoirs," Petroleum Science and Technology, vol. 35, no. 10, pp. 1022–1031, 2017.

[10] H. L. Crow, R. J. Enkin, J. B. Percival, and H. A. J. Russell, "Downhole nuclear magnetic resonance logging in glaciomarine sediments near Ottawa, Ontario, Canada," Near Surface Geophysics, vol. 18, no. 6, p. 591, 2020.

[11] J. O. Parra, C. Hackert, M. Bennett, and H. A. Collier, "Permeability and porosity images based on NMR, sonic, and seismic reflectivity: application to a carbonate aquifer," The Leading Edge, vol. 22, no. 11, pp. 1102–1108, 2003.

[12] E. P. Leite and A. C. Vidal, "3D porosity prediction from seismic inversion and neural networks," Computers & Geosciences, vol. 37, no. 8, pp. 1174–1180, 2011.

[13] X. Shi, G. Liu, Y. Cheng et al., "Brittleness index prediction in shale gas reservoirs based on efficient network models," Journal of Natural Gas Science and Engineering, vol. 35, pp. 673–685, 2016.

[14] P. Wang, S. Peng, and T. He, "A novel approach to total organic carbon content prediction in shale gas reservoirs with well logs data: Tonghua Basin, China," Journal of Natural Gas Science and Engineering, vol. 55, pp. 1–15, 2018.

[15] P. Wang and S. Peng, "A new scheme to improve the performance of artificial intelligence techniques for estimating total organic carbon from well logs," Energies, vol. 11, no. 4, p. 747, 2018.

[16] F. S. Feng, P. Wang, Z. Wei et al., "A new method for predicting the permeability of sandstone in deep reservoirs," Geofluids, vol. 2020, Article ID 8844464, 16 pages, 2020.

[17] A. A. Talkhestani, "Prediction of effective porosity from seismic attributes using locally linear model tree algorithm," Geophysical Prospecting, vol. 63, no. 3, pp. 680–693, 2015.

[18] P. Wang and S. Peng, "On a new method of estimating shear wave velocity from conventional well logs," Journal of Petroleum Science and Engineering, vol. 180, pp. 105–123, 2019.

[19] F. S. T. Haklidir and M. Haklidir, "Prediction of reservoir temperatures using hydrogeochemical data, western Anatolia geothermal systems (Turkey): a machine learning approach," Natural Resources Research, vol. 29, no. 4, pp. 2333–2346, 2020.

[20] A. A. Mahmoud, S. Elkatatny, and D. Al Shehri, "Application of machine learning in evaluation of the static Young's modulus for sandstone formations," Sustainability, vol. 12, no. 5, p. 1880, 2020.

[21] M. He, H. Gu, and H. Wan, "Log interpretation for lithology and fluid identification using deep neural network combined with MAHAKIL in a tight sandstone reservoir," Journal of Petroleum Science and Engineering, vol. 194, p. 107498, 2020.

[22] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.

[23] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.

[24] L. C. Mao and J. Zhao, "A survey on the new generation of deep learning in image processing," IEEE Access, vol. 7, pp. 172231–172263, 2019.

[25] R. Udendhran, M. Balamurugan, A. Suresh, and R. Varatharajan, "Enhancing image processing architecture using deep learning for embedded vision systems," Microprocessors and Microsystems, vol. 76, p. 103094, 2020.

[26] R. J. G. Van Sloun, R. Cohen, and Y. C. Eldar, "Deep learning in ultrasound imaging," Proceedings of the IEEE, vol. 108, no. 1, pp. 11–29, 2020.

[27] J. Liu, C. Wu, and J. Wang, "Gated recurrent units based neural network for time heterogeneous feedback recommendation," Information Sciences, vol. 423, pp. 50–65, 2018.

[28] Y. Hao, Y. Sheng, and J. Wang, "Variant gated recurrent units with encoders to preprocess packets for payload-aware intrusion detection," IEEE Access, vol. 7, pp. 49985–49998, 2019.

[29] F. T. Dezaki, Z. B. Liao, C. Luong et al., "Cardiac phase detection in echocardiograms with densely gated recurrent neural networks and global extrema loss," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1821–1832, 2019.

[30] Z. Wang, Y. Dong, W. Liu, and Z. Ma, "A novel fault diagnosis approach for chillers based on 1-D convolutional neural network and gated recurrent unit," Sensors, vol. 20, no. 9, p. 2458, 2020.


[31] H. Colonius, "An invitation to coupling and copulas, with applications to multisensory modeling," Journal of Mathematical Psychology, vol. 74, pp. 2–10, 2016.

[32] N. Uyttendaele, "On the estimation of nested Archimedean copulas: a theoretical and an experimental comparison," Computational Statistics, vol. 33, no. 2, pp. 1047–1070, 2018.

[33] P. Vincent-Lamarre, M. Calderini, and J.-P. Thivierge, "Learning long temporal sequences in spiking networks by multiplexing neural oscillations," Frontiers in Computational Neuroscience, vol. 14, p. 78, 2020.

[34] W. Huang, Y. Li, and Y. Huang, "Deep hybrid neural network and improved differential neuroevolution for chaotic time series prediction," IEEE Access, vol. 8, pp. 159552–159565, 2020.

[35] Y. Wu, M. Yuan, S. Dong, L. Lin, and Y. Liu, "Remaining useful life estimation of engineered systems using vanilla LSTM neural networks," Neurocomputing, vol. 275, pp. 167–179, 2018.

[36] H. Yan, Y. Qin, S. Xiang, Y. Wang, and H. Chen, "Long-term gear life prediction based on ordered neurons LSTM neural networks," Measurement, vol. 165, p. 108205, 2020.

[37] X. Shi, J. Wang, G. Liu, L. Yang, X. Ge, and S. Jiang, "Application of extreme learning machine and neural networks in total organic carbon content prediction in organic shale with wire line logs," Journal of Natural Gas Science and Engineering, vol. 33, pp. 687–702, 2016.

Mathematical Problems in Engineering 13
