Open Access

Response surface method for assessing energy production from geopressured geothermal reservoirs

Geothermal Energy 2016, 4:15

DOI: 10.1186/s40517-016-0057-5

Received: 5 August 2016

Accepted: 24 October 2016

Published: 2 November 2016


Developing low-enthalpy geothermal resources along the US Gulf Coast is attractive for reducing global warming and providing clean energy. In this work, synthetic yet representative models for typical geopressured geothermal reservoirs located along the US Gulf Coast are considered. A Box–Behnken experimental design is used to select a small set of these models to perform detailed reservoir simulation runs. Full quadratic linear models are fit to the simulation results, and their sufficiency is confirmed by comparing them to kriging response surfaces. To achieve a higher degree of efficiency in using the response surfaces, Hammersley sequence sampling (HSS) method is used instead of traditional Monte Carlo sampling. HSS ensures that the factor space is sampled more uniformly and the response distribution is converged in less time. By evaluating these proxy models in the sampled factor space, the sensitivity and uncertainty of the response to the factors can be assessed. In this work, the sensitivity and uncertainty of engineered convection is assessed. For quantifying engineered convection, five uncertain reservoir attributes were selected. The response was defined as the net extracted enthalpy. In particular, two different designs for harvesting energy from geothermal reservoirs were compared using the response surfaces. In the modeled systems, results show that the regular design is more effective than the reverse design for extracting energy from geopressured geothermal reservoirs.


Geothermal reservoir · Experimental design · Response surface · Sampling · Forced convection · Heat extraction


Reducing greenhouse gases and meeting the world's future energy demand require searching for clean alternative energy resources that can substitute for fossil fuels. Geopressured geothermal reservoirs along the US Gulf Coast are an alternative energy resource that has been considered marginal and has not been developed extensively. The information available about these resources comes from well tests performed at the time of their development (John et al. 1998). Assessing the uncertainties associated with the commerciality of these reservoirs by simulating and history matching each case independently is an expensive and time-consuming process and should be reserved for the project design stage. One way to quickly assess these assets is to study the sensitivity of the produced energy to uncertain features using reservoir modeling.

Though computational speed and memory are improving over time, detailed modeling for history matching each reservoir, or using Monte Carlo simulation, is not feasible because running many detailed models is computationally expensive. To overcome this limitation, there are three alternatives: (1) procedures that efficiently create history-matched models; (2) surrogate reduced-order models; and (3) statistical proxy models. The first approach uses efficient gradient-based or gradient-free algorithms for history matching production data and efficiently making field development plans (Shirangi and Durlofsky 2015; Shirangi 2014). The second approach builds surrogate reduced-order models using piecewise linearization algorithms. These algorithms increase the efficiency of the Newton loop by creating the Jacobian matrices around previously simulated points instead of solving the flow equations in the traditional way (Ansari 2014; He and Durlofsky 2014; Cardoso and Durlofsky 2010). The third approach, which is used in this work, is to run the detailed model using specific combinations of factors sampled by experimental design and then fit a proxy response surface to the factor space (Ansari 2016). Experimental design and response models are popular and widely used (Fisher and Genetiker 1960; Mishra et al. 2015). Schuetter et al. (2014) compare the use of Box–Behnken sampling and quadratic polynomial regression with Latin hypercube sampling, multivariate adaptive regression splines (MARS), and additivity and variance stabilization (AVAS) techniques for geological CO\(_{2}\) sequestration. They conclude that the model developed using Box–Behnken sampling and quadratic polynomials performs best. Following Schuetter et al. (2014), this work uses Box–Behnken experimental design. Experimental design and response surfaces have also been used in the context of geothermal reservoir engineering (Hoang et al. 2005; Quinao and Zarrouk 2014).
Response surface models are fast and sufficiently accurate to represent the detailed model, so they can be run thousands of times for uncertainty assessments. Traditionally, simple linear models are used to represent the actual model (Montgomery et al. 2012), and for most cases, quadratic linear models (polynomials) are adequate. Once the proxy response surface model is constructed from a small yet statistically representative set of detailed model runs, it can be used for sensitivity analysis and uncertainty assessment with sampling methods such as Monte Carlo (MC) or Hammersley sequence sampling (HSS).

This work compares different energy extraction designs for geopressured geothermal reservoirs and identifies the better technique. It further quantifies the uncertainty in the selected design using the developed proxy model and sampling methods.

This paper proceeds as follows: we first introduce design of experiments, response surface modeling, and sampling. Then, we apply these methods to compare two different heat extraction designs, the regular design and the reverse design, and select the better design by comparing their energy output. The regular design shows better performance than the reverse design. For the regular design, the uncertainty in the factors is used to obtain the uncertainty in the cumulative energy produced.


Design of experiments (DOE)

Many factors influence energy recovery, and the focus should be on the factors that affect it the most. Experimental design is an efficient method for sampling the factor space and calculating the response with a minimum number of runs (Montgomery 2008). Instead of changing factors one at a time, which cannot reveal factor interactions and requires a large number of simulation runs, the factors are changed systematically in an experimental design to reveal effects and interactions with a smaller set of designed simulation runs. We used a Box–Behnken design to fit a full quadratic response surface, including all interaction terms, to the simulation results (Box et al. 2005). This design needs fewer runs than its three-level counterparts (e.g., central composite designs, Fig. 1). Five uncertain geologic factors were chosen to test their effects on the response; because of sparse well locations and measurements, these factors cannot be accurately measured and the uncertainty in their values persists. One reason for selecting the Box–Behnken design is that it requires 41 runs for five factors, compared with 243 runs for a full three-level factorial and 32 for a full two-level factorial design. We avoid the full two-level factorial design because it cannot model quadratic effects (second-degree curvature).
Fig. 1

Two types of experimental design with three factors. The Box–Behnken design is used in this work [from Kalla (2005)]
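The run-count arithmetic above (4 runs per factor pair plus center points) can be reproduced with a short sketch. This is an illustration of how a Box–Behnken design is laid out in coded units, not the exact design matrix used in this work; a single center point is assumed, which gives the 41 runs quoted for five factors.

```python
from itertools import combinations

def box_behnken(k, n_center=1):
    """Box-Behnken design for k factors in coded units (-1, 0, +1).

    Each factor pair is run at its four (+/-1, +/-1) corners while the
    remaining factors sit at the mid level (0); center points are appended.
    """
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([[0] * k for _ in range(n_center)])
    return runs

design = box_behnken(5)  # the five uncertain reservoir factors
print(len(design))       # 4 * C(5,2) + 1 = 41 runs
```

Each run touches at most two factors away from their mid level, which is why the design stays so much smaller than the 243-run full three-level factorial.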

In reservoir engineering, factors can be categorized into three types: controllable, observable, and uncertain. The controllable factors may be engineered or selected, such as well location. The observable factors may be accurately measured, such as the reservoir thickness at each well location. However, some uncertain factors, such as porosity far from wells, can neither be measured nor engineered. These uncertain factors are the ones on which the sensitivity analysis should be based.

Response surface methods

Once the results of runs suggested by the designs are obtained, response surfaces are used to determine the correlation between the factors and the response (Montgomery and Myers 1995). Two widely used formulations for the response surfaces are regression and kriging.


The method of least squares is conventionally used to estimate the regression coefficients and develop response surface models (Montgomery et al. 2012). The linear or quadratic model fitted to the sparse detailed runs can be used to estimate the effect of each parameter on the objective function (Eq. 1).
$$\begin{aligned} \hat{y} = \beta _0 + \sum _{i=1}^{k}\beta _i x_i + \sum _{i=1}^{k}\sum _{j>i}\beta _{ij}x_i x_j + \sum _{i=1}^{k}\beta _{ii} x_i^2 \end{aligned}$$
In this equation, \(\hat{y}\) stands for the predicted response, x stands for the variable factor of interest, and the \(\beta\) values are the regression coefficients. Equation 2 represents the least squares method for calculating the coefficient vector \(\varvec{\beta }\).
$$\begin{aligned} \varvec{\beta } = (\mathbf {X}^\mathrm{{T}}\mathbf {X})^{-1}\mathbf {X}^\mathrm{{T}}\mathbf {y} \end{aligned}$$
The magnitude of a regression coefficient by itself does not indicate how strongly a factor influences the response, because it depends on the units in which the factor is expressed. Thus, to remove the effect of factor units on the regression coefficients (for example, using bar instead of pascal for pressure), the factors are scaled to the range \(-1\) to 1, which makes the coefficients directly comparable. For accurately estimating the response and evaluating nonlinear effects, factor interactions and quadratic effects may also be needed. In these situations, a second-degree or reduced polynomial model can be used. A reduced model is a second-degree polynomial model from which the unimportant factors and interactions are removed. The p value is generally used for selecting the important factors.
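The coded-unit scaling is a one-line affine map. A minimal sketch follows, using the 30–50 m thickness range from Table 2 as the example; the function name is ours, not from the paper.

```python
def code_factor(x, lo, hi):
    """Map a physical factor value from [lo, hi] onto the coded range [-1, 1]."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

# Example: reservoir thickness with levels between 30 m and 50 m.
print(code_factor(30.0, 30.0, 50.0))  # -1.0 (low level)
print(code_factor(40.0, 30.0, 50.0))  #  0.0 (mid level)
print(code_factor(50.0, 30.0, 50.0))  #  1.0 (high level)
```

With all factors coded this way, the fitted \(\beta\) values can be compared directly regardless of each factor's physical units.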


An alternative to polynomials for building response surfaces is ordinary kriging (Landa and Güyagüler 2003). This method linearly combines weighted observations (Eq. 3), with weights that depend on the distances between the target point and the observations (Eq. 5). The distance is calculated in the k-dimensional factor space, where k is the number of factors (Eq. 6). In Eqs. 3, 4, and 5, \(\mathbf R\) is the correlation matrix, which quantifies the correlation between observations.
$$\begin{aligned}&\hat{y} = \hat{\beta }_0 + \mathbf {r}(\mathbf {x})^\mathrm{{T}} \mathbf {R}^{-1} (\mathbf {y} -\hat{\beta }_0\mathbf {1}) \end{aligned}$$
$$\begin{aligned}&\hat{\beta }_0 = (\mathbf {1}^\mathrm{{T}} \mathbf {R}^{-1} \mathbf {1})^{-1}\mathbf {1}^\mathrm{{T}}\mathbf {R}^{-1}\mathbf {y} \end{aligned}$$
$$\begin{aligned}&\mathbf {r}(\mathbf {x})= \left[ R(\mathbf {x},\mathbf {x}_1), R(\mathbf {x},\mathbf {x}_2), R(\mathbf {x},\mathbf {x}_3), \dots , R(\mathbf {x},\mathbf {x}_n) \right] ^\mathrm{{T}} \end{aligned}$$
The correlation function can be modeled as a Gaussian, an exponential, or any other positive definite function. Equation 6 shows a Gaussian representation for \(\mathbf R\).
$$\begin{aligned} \mathbf {R(x_i,x_j)} = \exp \left( -\sum _{m=1}^{k} \theta _m |x_{m_{i}} - x_{m_{j}}|^2\right) \end{aligned}$$
In this equation, \(\theta\) is a vector of parameters used to fit the model, n is the number of runs, k is the number of factors, and \(x_{m_i}\) and \(x_{m_j}\) are the mth factor levels of design runs i and j. The distance between the target points and the observations is modeled using semivariogram models in the k-dimensional factor space. The covariance between points depends on the distance between them and decreases as the distance increases. Another useful feature of kriging is that it accounts for data redundancy and ensures that nearby points receive appropriate weight in predicting the target. These properties make kriging the best linear unbiased estimator (BLUE) for correlated data.
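Equations 3–6 can be put together in a compact sketch. The training points, responses, and \(\theta\) below are illustrative, not from the paper's models; a small hand-rolled solver stands in for the matrix inverse so the example stays dependency-free.

```python
import math

def gauss_corr(a, b, theta):
    """Gaussian correlation R(a, b) = exp(-sum_m theta_m |a_m - b_m|^2) (Eq. 6)."""
    return math.exp(-sum(t * (u - v) ** 2 for t, u, v in zip(theta, a, b)))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krige(x0, xs, ys, theta):
    """Ordinary-kriging prediction at x0 (Eqs. 3-5, applying R^-1 via solve)."""
    n = len(xs)
    R = [[gauss_corr(xs[i], xs[j], theta) for j in range(n)] for i in range(n)]
    beta0 = sum(solve(R, ys)) / sum(solve(R, [1.0] * n))  # (1^T R^-1 1)^-1 1^T R^-1 y
    weights = solve(R, [yi - beta0 for yi in ys])          # R^-1 (y - beta0 1)
    r = [gauss_corr(x0, xi, theta) for xi in xs]
    return beta0 + sum(ri * wi for ri, wi in zip(r, weights))

xs = [(0.0,), (0.4,), (1.0,)]  # coded factor levels (illustrative)
ys = [1.0, 2.0, 0.5]           # simulated responses (illustrative)
print(krige((0.4,), xs, ys, (5.0,)))  # ~2.0: kriging interpolates the design runs
```

Because the weights come from the same correlation matrix used for prediction, evaluating the surface at any design run returns the observed response exactly, which is the interpolation property exploited when checking a kriging proxy against the detailed runs.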


Once the proxy models are constructed, a sampling method is needed to sample the factors and to translate the uncertainty from the input to the response. For doing this, a Monte Carlo or quasi Monte Carlo method, such as HSS, is generally used (Kroese et al. 2011).

Unlike straight Monte Carlo, which samples the n-dimensional space randomly, a Hammersley sequence fills the space more uniformly (Fig. 2). This characteristic is known as low discrepancy. In a Hammersley sequence, each design point depends on the total number of points N and on the previously placed points, so the sample points are not independent. The points generated by low-discrepancy sampling methods are highly ordered and exhibit much more regularity, and results based on these sequences converge more efficiently than multidimensional Monte Carlo (Kroese et al. 2011). The only drawback of Hammersley sequences is that the number of points must be specified before sampling; if the results are not accurate enough and the number of points must be changed, the process has to be repeated and the previous results discarded. The Hammersley sequences span the n-dimensional space with a small yet representative sample. The procedure for obtaining a Hammersley sequence is described below.

Any positive integer n can be expressed by a prime base p as follows:
$$\begin{aligned} n = \beta _0 + \beta _1 p + \beta _2 p^2 + \dots + \beta _r p^r \end{aligned},$$
where every \(\beta _i\) is an integer number in the range \([0,p-1]\). Now, a function \(\phi _p\) of n can be defined as follows:
$$\begin{aligned} \phi _p(n)=\frac{\beta _0}{p}+\frac{\beta _1}{p^2}+\frac{\beta _2}{p^3}+\dots +\frac{\beta _r}{p^{r+1}} \end{aligned}$$
Hammersley points in the m-dimensional space can be given by
$$\begin{aligned} \varvec{x_m}(n) = \Big ( \frac{n}{N}, \phi _{R_1}(n), \phi _{R_2}(n), \dots , \phi _{R_{m-1}}(n) \Big )^\mathrm{{T}} \end{aligned},$$
where \(n=1, 2, \dots , N\) and \(\varvec{x_m}\) is the location of the point in the m-dimensional space. N is the total number of Hammersley sample points and \(R_1, R_2, \dots , R_{m-1}\) are \(m-1\) prime numbers used as bases for the remaining dimensions.
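The two formulas above, the base-p digit expansion of n and the radical-inverse function \(\phi _p\), translate directly into code. A minimal sketch (function names are ours):

```python
def radical_inverse(n, p):
    """phi_p(n): reflect the base-p digits of n about the radix point."""
    phi, denom = 0.0, 1.0 / p
    while n > 0:
        n, digit = divmod(n, p)   # peel off base-p digits beta_0, beta_1, ...
        phi += digit * denom      # accumulate beta_i / p^(i+1)
        denom /= p
    return phi

def hammersley(N, primes):
    """N Hammersley points in len(primes)+1 dimensions: (n/N, phi_p1(n), ...)."""
    return [tuple([n / N] + [radical_inverse(n, p) for p in primes])
            for n in range(1, N + 1)]

points = hammersley(8, [2, 3])  # 8 points in the unit cube, bases 2 and 3
print(points[0])                # (0.125, 0.5, 0.3333333333333333)
```

For five factors, four prime bases (2, 3, 5, 7) would be paired with the n/N coordinate, and each uniform coordinate is then mapped through the inverse CDF of the corresponding factor distribution.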
Fig. 2

Monte Carlo vs. Hammersley method for sampling a two-dimensional factor space. HSS fills the factor space uniformly (from Kalla (2005))

Regular vs. reverse design

A hypothetical yet representative base model was developed from data obtained from the Camerina A sand zone located in the Gueydan Dome area, Vermilion Parish, Louisiana (Fig. 3). The model is a two-dimensional vertical cross section. This model was proposed by Plaksina et al. (2011) and its characteristics are shown in Table 1. Similar models have been used in waterflooding (Shook et al. 1992) and CO\(_{2}\) flooding studies (Wood et al. 2008). The average temperature of the zone (Fig. 3) is calculated to be 142.5 \(^\circ\)C, with a range from 128 to 160 \(^\circ\)C from top to bottom of the sand (Gray and Nunn 2010). A corner point grid system with 30 grid blocks along the X-axis and 7 along the Z-axis was found sufficiently accurate for modeling this vertical cross section. The temperature of each grid block was calculated by setting the temperature gradient along the Z direction to \(18\,^\circ\)C/km and the temperature of the topmost grid block to \(135\,^\circ\)C. The viscosity and density of the water depend on temperature and pressure. The geopressure zone extends from 4200 m to between 4650 and 4880 m depth; thus, the depth assigned to the topmost grid block is 4200 m. In the Gueydan Dome area, a shale sequence with a thickness ranging from 365 to 426 m overlies the Camerina A sand and a shale sequence of 150 m underlies it. Hence, the Camerina A structure represents a four-way closure with one side enclosed by a salt dome (Ansari et al. 2014). The model does not include the salt dome because Gray and Nunn (2010) found that the Gueydan salt dome is not at the optimum burial depth to cause an increase in the temperature of the reservoir fluid.
Table 1

Characteristics of the base hypothetical model [after Plaksina et al. (2011)]


Property: base value (unit)

Temperature of top cell
Matrix compressibility: \(2.0\times 10^{-5}\,\mathrm {kPa^{-1}}\)
Dip angle
Reservoir length
Cross-section width
Reservoir thickness
Rock heat capacity: \(2.6\times 10^6\,\mathrm {J/(m^3\,C)}\)
Rock bulk density (\(\mathrm {g/cm^3}\))
Thermal heat conductivity: \(172{,}800\,\mathrm {J/(m\,day\,C)}\)
Water thermal expansion: \(8.8\times 10^{-4}\,\mathrm {C^{-1}}\)
Water compressibility: \(4.5\times 10^{-7}\,\mathrm {kPa^{-1}}\)
Water molecular weight (\(\mathrm {kg/gmol}\))
Water molar density: \(55{,}500\,\mathrm {gmol/m^3}\)
Water density (\(\mathrm {g/cm^3}\))

Factors shown in italic are used in the experimental design

Fig. 3

Vertical cross section of the Gueydan Dome (right figure modified from Robinson (1967); left figure from Szalkowski and Hanor (2003)). The Gueydan Dome, located in Vermilion Parish, LA, is shown by the red dot. The Gueydan salt dome and the Camerina A sand zone are shown schematically in the right figure

The equilibrium state obtained from natural convection simulations (1000 years of simulation without injection or production) served as the initial condition for forced convection. During the natural convection period, the temperature of the reservoir boundary is the same as that of the surrounding cap/base rock. As the sediment is cooled by cold-water injection, it starts to gain conductive heat from the cap/base rock. A model developed by Vinsome and Westerveld (1980) is used for peripheral boundary heat gain. The model is based on a semi-analytical impermeable heat conduction formulation that adequately describes the boundary condition at the interface. This model ensures adequate accuracy because, in practice, the thermal conduction coefficients between the reservoir and the cap/base rock are not precisely known.

Figure 4 compares three different boundary conditions for the base case in Table 1. The first case assumes that the reservoir is sealed and there is no heat conduction between the reservoir and the cap/base rocks. In the second boundary condition, the temperature of the cap/base rocks does not change as the reservoir cools down. The third case is the semi-analytical model of Vinsome and Westerveld (1980). The heat-insulated boundary condition yields much lower produced energy, and the constant-temperature boundary condition much higher produced energy, than the more realistic boundary condition proposed by Vinsome and Westerveld (1980). After 30 years, the cumulative produced energy from the semi-analytical model is 38% more than that of the insulated reservoir and 22% less than that of the constant-temperature boundary condition.

Figure 4 also implies that the natural heat flux from the earth (50 \(\mathrm {mW/m^2}\)), attributed to radioactive decay, has no perceptible influence on the reservoir once exploitation begins. The natural heat flux (i.e., 20,000 W for the considered vertical cross section with a length of 4000 m and a width of 100 m) is negligible compared with the heat gained from the cap/base rock (order of \(10^{11}\) W). Figure 4 also shows that the rate of recharge from the cap/base rock to the reservoir initially increases and then decreases as the base/cap rock cools down.
Fig. 4

Different thermal boundary conditions

Two designs for extracting energy are considered in this study: regular design and reverse design. The regular design places the production well at the bottom of the reservoir, and the cool water is re-injected at the top. The reverse design produces the hot geofluid from the updip reservoir and injects the cooled geofluid into the deeper sections. In the models, the wells are perforated only at the topmost and the bottommost grid blocks (Fig. 5). In all models, the produced geofluid is injected back into the reservoir. The production and injection rates are set to \(2000\,\mathrm {m^3/day}\). The temperature of the injected water is set to 70 \(^\circ\)C, which is typical for low-enthalpy geothermal reservoirs. The models were simulated for 30 years. No salt concentration or chemical reactions are considered for the geofluid. The permeability and porosity are uniform and remain constant during the production period. Five factors, highlighted in Table 1, are more uncertain than the others; the range of possible change in these factors is summarized in Table 2. This study aims to determine which heat extraction design is more effective and how changes in these factors affect the produced energy. A three-level Box–Behnken experimental design with 41 runs is used for developing the response surface models. Both polynomial and kriging response surface models are investigated.
Fig. 5

Schematic of the regular design. In the regular design, the cool water is injected at the top and the hot water is produced from the bottom. In the reverse design, the hot water is produced from the top of the reservoir and is injected back at the bottom

Table 2

Levels of factors in the Box–Behnken design [Plaksina et al. (2011)]


Length (m), Thickness (m), Dip angle, Porosity, Permeability (md)
Results and discussion

The regular and reverse designs are compared for all the combinations of factors using Eq. 10
$$\begin{aligned} \epsilon _\mathrm{{r}} = \frac{E_\mathrm{{reg}} - E_\mathrm{{rev}}}{E_\mathrm{{rev}}}\times 100 \end{aligned}$$
in which \(\epsilon _\mathrm{{r}}\) shows the percent difference between the energy that can be extracted from the regular and reverse designs.
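Equation 10 is a simple percent difference; a one-function sketch with illustrative (not simulated) energy values:

```python
def epsilon_r(e_reg, e_rev):
    """Percent difference of Eq. 10: energy advantage of regular over reverse design."""
    return (e_reg - e_rev) / e_rev * 100.0

# Illustrative values: the regular design extracting 5% more energy.
print(epsilon_r(8.4e15, 8.0e15))  # about 5 (%); positive favors the regular design
```

A positive \(\epsilon _\mathrm{{r}}\) everywhere in the factor space is what identifies the regular design as the better choice.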
In Fig. 6, the blue dots show the detailed model runs using the simulator. The contour lines show the response surface fitted to these model runs and projected onto various subsets of the factor space. These plots clearly demonstrate local gradients (i.e., local sensitivity) and the average change in the response. \(\epsilon _\mathrm{{r}}\) increases as the dip angle and length increase. At small dip angles, the effect of length on \(\epsilon _\mathrm{{r}}\) is small, and at large dip angles, the effect of length is large (sharp gradient in Fig. 6a). \(\epsilon _\mathrm{{r}}\) is more sensitive to the dip angle than to the thickness (Fig. 6b). An increase in thickness from 30 to 50 m increases \(\epsilon _\mathrm{{r}}\) by 0.2%, while the increase in porosity increases \(\epsilon _\mathrm{{r}}\) by 0.05% (Fig. 6c). Permeability has less effect on \(\epsilon _\mathrm{{r}}\) than the other factors (Fig. 6d).
Fig. 6

Response surface results for the \(\epsilon _\mathrm{{r}}\) using the polynomial method. The blue dots show the detailed model runs using the simulator. The contour lines show the response surface fitted to these detailed model runs and projected into the various subspaces of factor space. The reservoir’s length and the dip angle have the maximum effect on the \(\epsilon _\mathrm{{r}}\). At high dip angles, the permeability has the least effect on the \(\epsilon _\mathrm{{r}}\)

The kriging response surface (Fig. 7) is comparable to the polynomial response surface (Fig. 6); there are only subtle differences between them, the biggest being in the permeability–porosity relationship. \(\epsilon _\mathrm{{r}}\) is positive in all of these figures, indicating that the regular design is more effective than the reverse design in the modeled systems. For the modeled systems, the permeability range tested does not favor one energy extraction design over the other.
Fig. 7

Response surface results for \(\epsilon _\mathrm{{r}}\) using the kriging method. The kriging results confirm those of the polynomial response surface (Fig. 6); there are only subtle differences between them

Regular design

The regular design was modeled in detail, and a quadratic linear response surface model was fit to the simulation results. Our experience shows that a second-degree polynomial gives a better fit than a first-degree one. The factors and interaction terms with p values less than 0.05 were selected as important (Table 3). Then, each factor was assigned a specific distribution. Both Monte Carlo and Hammersley sequence sampling were used to sample the factor distributions and translate the uncertainty from the factors to the response (the net extracted energy).

The p value for all factors except permeability is less than 0.001 (Table 3), which means that all factors except permeability have a significant effect on heat production. For the range of permeability considered, knowledge of the permeability map is less important for predicting thermal energy recovery, presumably because of the constant well-rate assumption. This makes sense because the pressure of a geopressured geothermal reservoir is very high, and this pressure can sustain the flow rate imposed on the production well over the modeled range of permeability (Table 2).

A low p value and a positive coefficient for porosity in Table 3 indicate that an increase in porosity increases the produced energy. The fundamental idea in geothermal reservoirs is to extract the heat stored in the rock using the fluid as the conduit. Increasing the fluid content of the system increases the produced energy because the thermal capacity of the brine is greater than that of the rock.

The dip angle, length, and thickness of the reservoir affect the heat production because all of these factors increase the temperature of the production well grid block. When the change in the response caused by the level change of one variable is not the same at all levels of another variable, the interaction term between the two variables is nonzero. For example, the impact of the reservoir’s length on the produced energy for zero dip angle differs from when the dip angle is high. Thus, the interaction term between the length and the reservoir’s dip angle is significant. Similarly, when the regular design was being compared with the reverse design (Figs. 6a, 7a), the interaction term between the length and the reservoir’s dip angle was significant.
Table 3

Summary of the second-order linear regression for the regular design: representing the interaction between two factors


Term (\(\beta\) value, standard error, t value, and Pr(\({>}|t|\)) are reported for each):

I (thickness\(^{2}\))
I (length\(^{2}\))
I (porosity\(^{2}\))
I (permeability\(^{2}\))
I (reservoirDip\(^{2}\))
Thickness: length
Thickness: porosity
Thickness: permeability
Thickness: reservoirDip
Length: porosity
Length: permeability
Length: reservoirDip
Permeability: porosity
Porosity: reservoirDip
Permeability: reservoirDip

Estimate stands for the coefficient value of the regression model. The standard error, t value, and p value for each coefficient are given. Gray shading marks the important terms that should be retained in the reduced model; the p value is used for selecting the important predictors

Fig. 8

Testing the model by plotting observation vs. prediction

Fig. 9

Distribution of the factors

To test the model, the observations are plotted versus the model predictions (Fig. 8). The points fall on the 45\(^\circ\) line, which means the model is adequate. This model can be used for Monte Carlo sampling of the factor distributions. For quantifying the uncertainty in produced energy, a distribution was assigned to each factor. It is assumed that length, thickness, reservoir dip angle, and porosity each follow a normal distribution (Fig. 9). A log-normal distribution is assigned to permeability.

Both MC and HSS methods give comparable results for the distribution of the response (Fig. 10). HSS was about 10 times more efficient than MC simulation (MC took 121.81 s while HSS took 12.73 s) because of its quasi-random, low-discrepancy property. The mean and median of the energy recovery distribution are around \(8.1\times 10^{15}\,\) J for both methods. The obtained distribution of the extracted energy has a normal shape, which makes sense because the response is sensitive to the length, dip, and thickness, all of which have normal distributions. The extracted energy ranges between \(6.8\times 10^{15}\) and \(9.2\times 10^{15}\)  J, and the box plot above the distribution indicates that 50% of the distribution lies between \(7.85\times 10^{15}\) and \(8.35\times 10^{15}\,\) J.
Fig. 10

Distribution of the extracted energy
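The MC-versus-HSS propagation step can be sketched end to end: draw uniforms (low-discrepancy or pseudo-random), map them through the inverse CDF of a factor distribution, and evaluate the proxy. The quadratic proxy coefficients and the single coded factor below are hypothetical stand-ins for the paper's fitted five-factor model.

```python
import random
from statistics import NormalDist, mean

def phi2(n):
    """Base-2 radical inverse (van der Corput), the 1-D building block of HSS."""
    v, d = 0.0, 0.5
    while n:
        n, bit = divmod(n, 2)
        v += bit * d
        d /= 2.0
    return v

def proxy(x):
    """Hypothetical quadratic proxy: net extracted energy (J) vs. one coded factor."""
    return 8.1e15 + 0.4e15 * x - 0.1e15 * x * x

factor = NormalDist(mu=0.0, sigma=0.5)  # assumed factor distribution (coded units)

N = 4096
# HSS: low-discrepancy uniforms phi2(1..N) mapped to normals via the inverse CDF.
hss = [proxy(factor.inv_cdf(phi2(n))) for n in range(1, N + 1)]
# Plain Monte Carlo with pseudo-random uniforms for comparison.
random.seed(1)
mc = [proxy(factor.inv_cdf(random.random())) for _ in range(N)]
print(f"HSS mean: {mean(hss):.4e} J, MC mean: {mean(mc):.4e} J")
```

Both estimators converge to the same response distribution; the uniform space-filling of the Hammersley points is what lets the HSS run use far fewer samples (and hence less time) for the same accuracy.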


The regular design outperforms the reverse design for heat production in the modeled systems. This result is confirmed using both polynomial and kriging response surfaces. The heat recovery from the regular design improves as the reservoir length, dip angle, or thickness increases. The results indicate that an important criterion in evaluating a set of geothermal reservoirs with adequately high temperature is the size of the reservoir. For the reservoir models within the studied ranges, the reservoir dip angle is less important than the reservoir size. The proxy models were efficiently used to construct the produced energy distribution from the factor distributions. For additional efficiency, HSS was used; it was about 10 times faster than Monte Carlo simulation. For the considered problem, the produced energy was between \(7\times 10^{15}\) and \(9\times 10^{15}\) J. Future research should focus on testing the uncertainty in structural and isopach maps of a real reservoir model and comparing the results with those published in this work.


Authors' contributions

EA carried out the modeling and coding and developed the results. EA also drafted the manuscript. RH supervised the research and guided the interpretation of the results. RH also considerably edited and improved the drafts. Both authors read and approved the final manuscript.


The R source codes and the datasets for this study are available on request to the authors. The authors gratefully acknowledge financial support for this work from the US Department of Energy under grant DE-EE0005125. We thank Computer Modeling Group for providing reservoir simulation software. We also thank Christopher D. White and the members of the LSU Geothermal team for their comments, suggestions, and ideas supporting our efforts.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

Louisiana State University


  1. Ansari E. Development of a surrogate simulator for two-phase subsurface flow simulation using trajectory piecewise linearization. J Pet Explor Prod Technol. 2014;4(3):315–25.
  2. Ansari E. Mathematical scaling and statistical modeling of geopressured geothermal reservoirs. Baton Rouge: Louisiana State University; 2016.
  3. Ansari E, Hughes R, White CD. Well placement optimization for maximum energy recovery from hot saline aquifers. In: 39th Workshop on geothermal reservoir engineering, SGP-TR-202. Stanford: Stanford University; 2014.
  4. Box GE, Hunter JS, Hunter WG. Statistics for experimenters: design, innovation, and discovery. 2nd ed. Hoboken: Wiley; 2005.
  5. Cardoso M, Durlofsky LJ. Linearized reduced-order models for subsurface flow simulation. J Comput Phys. 2010;229(3):681–700.
  6. Fisher RA. The design of experiments. Edinburgh: Oliver and Boyd; 1960.
  7. Gray T, Nunn J. Geothermal resource assessment of the Gueydan salt dome and the adjacent Southeast Gueydan field. Gulf Coast Assoc Geol Soc Trans. 2010;60:307–23.
  8. He J, Durlofsky LJ. Reduced-order modeling for compositional simulation by use of trajectory piecewise linearization. SPE J. 2014;19(05):858–72.
  9. Hoang V, Alamsyah O, Roberts J. Darajat geothermal field expansion performance: a probabilistic forecast. In: Proceedings world geothermal congress. 2005. p. 24–29.
  10. John C, Maciasz G, Harder B. Resource description, program history, wells tested, university and company based research, site restoration. Gulf Coast geopressured-geothermal program summary report compilation. Baton Rouge: Tech. Rep., Louisiana State University, Basin Research Institute; 1998.
  11. Kalla S. Use of orthogonal arrays, quasi-Monte Carlo sampling, and kriging response models for reservoir simulation with many varying factors. Master's thesis. Baton Rouge: Louisiana State University; 2005.
  12. Kroese DP, Taimre T, Botev ZI. Handbook of Monte Carlo methods. Hoboken: Wiley; 2011.
  13. Landa JL, Güyagüler B. A methodology for history matching and the assessment of uncertainties associated with flow prediction. In: SPE annual technical conference and exhibition. Society of Petroleum Engineers; 2003.
  14. Mishra S, Ganesh PR, Schuetter J, He J, Jin Z, Durlofsky LJ. Developing and validating simplified predictive models for \(CO_2\) geologic sequestration. In: SPE annual technical conference and exhibition. Society of Petroleum Engineers; 2015.
  15. Montgomery DC. Design and analysis of experiments. New York: Wiley; 2008.
  16. Montgomery DC, Myers RH. Response surface methodology: process and product optimization using designed experiments. New York: Wiley; 1995.
  17. Montgomery DC, Peck EA, Vining GG. Introduction to linear regression analysis. New York: Wiley; 2012.
  18. Plaksina T, White C, Nunn J, Gray T. Effects of coupled convection and \(CO_2\) injection in stimulation of geopressured geothermal reservoirs. In: 36th Workshop on geothermal reservoir engineering. Stanford: Stanford University; 2011. p. 146–154.
  19. Quinao JJ, Zarrouk SJ. Applications of experimental design and response surface method in probabilistic geothermal resource assessment: preliminary results. In: Proceedings, 39th Workshop on geothermal reservoir engineering. Stanford: Stanford University; 2014.
  20. Robinson E. Acadia and Vermilion parishes. Plano: The Pure Oil Company, Geomap Company; 1967.
  21. Schuetter J, Ganesh PR, Mooney D. Building statistical proxy models for \(CO_2\) geologic sequestration. Energy Procedia. 2014;63:3702–14.
  22. Shirangi MG. History matching production data and uncertainty assessment with an efficient TSVD parameterization algorithm. J Pet Sci Eng. 2014;113:54–71.
  23. Shirangi MG, Durlofsky LJ. Closed-loop field development under uncertainty by use of optimization with sample validation. SPE J. 2015.
  24. Shook M, Li D, Lake LW. Scaling immiscible flow through permeable media by inspectional analysis. In Situ. 1992;16(4):311.
  25. Szalkowski DS, Hanor JS. Spatial variations in the salinity of produced waters from southwestern Louisiana. Gulf Coast Assoc Geol Soc Trans. 2003;53:798–806.
  26. Vinsome P, Westerveld J. A simple method for predicting cap and base rock heat losses in thermal reservoir simulators. J Can Pet Technol. 1980;19(3).
  27. Wood DJ, Lake LW, Johns RT, Nunez V. A screening model for \(CO_2\) flooding and storage in Gulf Coast reservoirs based on dimensionless groups. SPE Reserv Eval Eng. 2008;11(03):513–20.


© The Author(s) 2016