More Oil  |  Lower Cost  |  Faster  |  Less Risk




Webinar - Subsurface Optimization: The Next Step in your Digital Journey?

Applying Machine Learning, Reservoir Physics, and Advanced Mathematics to the Optimization of Brownfields. Listen to Deloitte and FOROIL for an in-depth discussion of the benefits of extending your digital strategy to the subsurface. Learn how other companies have created immediate and substantial impact without adding the cultural, organizational and technical implementation complexities inherent in surface digital solutions.

When it aired, the webinar was attended by 473 people, 56% of whom were from operating companies.


Register to listen to the webinar with the presentation





Digital Oil Recovery™ powered by FOROIL to Increase Recovery from Brownfields

First-of-its-kind predictive analysis can determine the optimal future development program from over 15 million potential development plans

FOROIL's solutions are mathematically derived from physical equations and from historical production data, including flow rates, pressures, phase measurements and other diverse measurements.

The mathematics describe the dynamic behavior of the reservoir over time, which supplements existing geologic models to provide more accurate production forecasts and extremely rapid analysis.

Using seven or more years of production data, FOROIL's patented program forecasts and processes 15 million development plans overnight to help identify the optimal future development plan based on client-defined objectives and constraints.

Twenty percent production and reserve increases have been routinely generated with little to no capital expenditures. More aggressive capital programs have generated production and reserve growth greater than 50 percent.

Because FOROIL's development plans are based on the actual response of the field, they are generally less risky and easier to use and run. The plans can therefore serve as a platform for fast, high-quality decision-making across traditional silos, from reservoir and wells to surface facilities.

Major, national and independent oil companies have successfully deployed Digital Oil Recovery™ technology.

Considering that 65 percent of the world's oil production comes from conventional brownfields, the financial implications of this capability are staggering.

FOROIL – DELOITTE Collaboration

FOROIL and Deloitte are collaborating to help upstream clients dramatically increase production and reserves from brownfields (conventional oilfields with more than seven years' operating history) using historical production data, breakthrough mathematical modeling and high-performance computing.

The collaboration brings together FOROIL's patented, tested production-forecasting and field development engines with Deloitte's broad array of consulting and implementation services to assist oil companies in making the most capital-efficient decisions at the field, portfolio and corporate level.

"This technology could have as much impact on the industry as 3D seismic," said Scott Sanderson, principal, Deloitte Consulting LLP.

"FOROIL's techniques are advanced well beyond typical big data or analytics capabilities. We believe the combination of breakthrough mathematics, reservoir physics, machine learning and massive parallel computing to create predictive and optimized results is unique in the industry."

"Deloitte is constantly expanding its digital ecosystem to create greater value for clients," said Janet Foutty, chairman and CEO, Deloitte Consulting LLP.

"With FOROIL, we are collaborating with an organization that has a breakthrough approach for using cognitive capabilities to design actionable plans that increase oil production from developed fields while using capital efficiently."

"Our technology is based on the premise that past data tells a story of the dynamic behavior of the reservoir," said Hugues de Saint Germain, FOROIL's Founder and Chairman.

"But it is an extraordinarily complex story requiring data, mathematics, reservoir physics and computing power. The industry needed a sort of Rosetta Stone to translate that story into actionable and reliable plans, and that is what we provide in a matter of weeks. As a leader in breakthrough digital technologies, Deloitte understands this, and we are excited about working with them to turn huge existing data sets collected for decades by our industry into digital assets, to create incremental production, reserves and shareholder value for our mutual clients around the world."





Frequently Asked Questions

How can the DOR model complement our existing reservoir models?

DOR technology is fundamentally a tool to find a better way to produce a given field. It is based on learning how the field is behaving. DOR is not designed to tell why the field behaves the way it does, but rather how to make the best of it. As a secondary result, it will help you understand your reservoir better, but our primary objective is to generate those 15 million plans and find the best way to produce.

What machine learning techniques are used in behavioral modelling? Are the methods used for geological modelling of reservoirs suitable for behavioral modelling? For example, ANN (Artificial Neural Network) methods such as the MLP (Multi-Layer Perceptron) may be used for estimating reservoir properties; can the same tools be used in behavioral modelling?

No, we are not using those techniques. The principle is to apply optimization algorithms, so we solve two optimization problems in sequence. The first is to find the best candidate model within that space of solutions, the one that gives the best history match. There is no difference in principle from history matching a traditional model; the difference is that we work in a different space. Once we have found the best model (which, incidentally, is unique in our case, as opposed to a meshed model), we use that model to test, fully calculate and rank 15 million plans, and that is the second optimization problem. We make sure that we cover the vast domain of possibilities and find the best combination of parameters, which in this case are where to drill, how to modify the production rates and injection rates, how much EOR fluid to inject, and so forth. Those are the levers we play with, and from them come the 15 million scenarios we generate, calculate and rank.
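The two-stage structure described above (first select the best-matching candidate model, then use it to evaluate and rank many plans) can be sketched generically. This is a toy illustration, not FOROIL's actual method or API: the model class, its single parameter, and both objective functions are invented for the example.

```python
# Hypothetical sketch of a two-stage optimization.
# Stage 1: pick the candidate model with the best history match.
# Stage 2: use that model to evaluate and rank many development plans.
# CandidateModel, history_error and forecast_value are illustrative
# assumptions, not FOROIL's actual components.
from dataclasses import dataclass
import random

@dataclass
class CandidateModel:
    decline_rate: float  # stand-in for the model's free parameters

def history_error(model: CandidateModel, observed: list[float]) -> float:
    """Squared mismatch between the model's hindcast and observed rates."""
    rate, err = 100.0, 0.0
    for obs in observed:
        err += (rate - obs) ** 2
        rate *= 1.0 - model.decline_rate
    return err

def forecast_value(model: CandidateModel, plan: dict) -> float:
    """Cumulative forecast production under a candidate plan."""
    rate, total = 100.0 * plan["choke"], 0.0
    for _ in range(plan["years"]):
        total += rate
        rate *= 1.0 - model.decline_rate
    return total

observed = [100.0, 90.0, 81.0, 72.9, 65.6]  # toy production history

# Stage 1: best history match over a sampled space of candidate models.
candidates = [CandidateModel(d / 1000) for d in range(1, 300)]
best_model = min(candidates, key=lambda m: history_error(m, observed))

# Stage 2: evaluate and rank a large sample of development plans.
plans = [{"choke": random.uniform(0.5, 1.0), "years": random.randint(3, 10)}
         for _ in range(10_000)]
best_plan = max(plans, key=lambda p: forecast_value(best_model, p))
print(best_model.decline_rate, best_plan)
```

In the real process the candidate space is far richer and the plan levers include drilling locations, rates and EOR volumes, but the generate-evaluate-rank shape is the same.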

What is meant by measured data? Interpreted data is derived from measured data. Does FOROIL use only production rate and pressure data? You can't use Darcy's law without incorporating G&G.

Thank you; that is a very important question. We take all the measured data, so I will make a brief list. We take production data and injection data, well by well. We take all the pressure measurements made on the field, whether bottom-hole flowing pressure, wellhead pressure or static pressure, all as measured. We also take all the geological measurements made at the wells and on the cores, so porosity, permeability, everything. We take all the PVT measurements made on the fluids. All the measured data is taken as input, just as for a meshed model. What we do not take as input is interpreted data, such as seismic data or the geological model that you build from your own interpretation of the field.

How are Field Development Plans generated and provided to the algorithm for optimization?

That is part of the know-how: by solving the non-uniqueness issue, we have made the optimization problem more complex, because the space of solutions, which I have mentioned a few times, does not have a nice shape for the mathematical optimization algorithms we can apply. So part of the know-how we have developed over time is to make sure that we cover the vast domain of possibilities in such a way that, having run 15 million scenarios, we are fairly sure we are not too far from the absolute optimum. We cannot guarantee that we reach the optimum, because there is no way to do that, but what we can say is that a) we have abundantly sampled the vast domain of possibilities, and b) we have significantly increased the output of your field compared to your reference case.

How are economics embedded into the optimization?

Our solution is not off-the-shelf software: for each project, our developers write code specific to your field, with your specific issues and needs. Regarding economics, we take into account all the costs and oil prices relevant to your case. We will code the cost of drilling, the cost of operating, and the cost of debottlenecking surface equipment, because surface equipment is part of the optimization process. We will also embed the fiscal terms if you wish, and we will of course take the price of oil as a given; all of these parameters can vary through time. In the end, we will code your specific objective function, the objective that you want to achieve, whether it be net present value, CAPEX per barrel, rate of return on your investments, or a combination of those. Any specific objective that you specify will be optimized.
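The answer above describes coding field-specific costs, prices, fiscal terms and a client-chosen objective. As a minimal sketch of one such objective, here is a net-present-value function over a yearly production profile; every number and name is an illustrative assumption, and CAPEX per barrel or rate of return would be coded in the same configurable way.

```python
# Hedged sketch of a client-configurable economic objective (NPV here).
# All costs, prices, volumes and the discount rate are illustrative.

def npv(production_bbl, oil_price, opex_per_bbl, capex, discount_rate):
    """Net present value of a yearly production profile.

    production_bbl, oil_price, opex_per_bbl and capex are per-year
    lists, so each parameter can vary through time as noted above.
    """
    value = 0.0
    for year, bbl in enumerate(production_bbl):
        cash = bbl * (oil_price[year] - opex_per_bbl[year]) - capex[year]
        value += cash / (1.0 + discount_rate) ** year
    return value

# Example: a 3-year plan with one up-front drilling investment.
prod = [200_000, 180_000, 162_000]   # barrels per year
price = [70.0, 72.0, 75.0]           # USD/bbl, varying in time
opex = [15.0, 15.5, 16.0]            # USD/bbl
capex = [10_000_000, 0, 0]           # USD, drilling in year 1
print(round(npv(prod, price, opex, capex, 0.10)))
```

An optimizer would then rank each candidate development plan by calling an objective like this on its forecast production profile.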

How reliable is this method when we do not have data related to what is going to be implemented in the future?

This is a limitation of what we do. We have three limitations. I mentioned the first two, which are the number of wells and the number of years; the last one is that we must limit our recommendations to types of actions that have already been implemented on that particular field in the past. So if you had secondary recovery on your field, we will help you optimize it, but if you did not have EOR on your field, we cannot predict what the behavior will be with EOR, unless you have already been running it for some time. It is basically a machine learning process, learning the response of the field to each drainage technique, so we need a past to learn from.

How are physics included? Do you actually solve the flow equations?

Yes, we do, and this is done within the space of solutions. We couple the space of solutions with the data and find the best candidate model within that space.

Do you have successful stories for deep water fields? Compared with unconventional reservoirs, we don't have many wells and much data available for deep water fields.

Yes, we have. Having said that, in some cases you do not have a sufficient number of wells: some fields in the Gulf of Mexico do not have enough wells, and some fields in the North Sea do not have enough wells. But apart from that, if you have the minimum number of wells and the minimum number of years of production, the technology applies equally well to deep offshore, offshore or onshore fields; there is no difference.

How long will it take to get a FOROIL report for a 10-year-old field with 100 wells, already unitized, with gas and water injection?

How long will it take to do the job? Three months. In three months, we will set up the model, run our 15 million scenarios, and find the best scenario for each type of strategy. What we call a type of strategy is, for instance, no investment; a second one might be a 10 million USD investment; another a 20 million USD investment. For each of the strategies defined by the client, we will find the best plan. The entire process for addressing, say, four different strategies will take three months. After that we go into the plan customization phase, which roughly takes a few weeks, during which you will ask us plenty of questions and we will fine-tune the plan to make it more practical, taking into account additional constraints that you will have disclosed to us.

Can your method implement optimization of tertiary recovery?

The answer is yes. Again, it depends on the number of wells and the number of years the field has been under that form of recovery. If you think of it as a learning system, it needs a training data set to learn from, and whether it is primary, secondary, or tertiary recovery, the tool can model and learn from the past behavior.

How did you embed reservoir physics into the data?

Reservoir physics is not embedded in the data; it is embedded in the custom space of solutions. In that space, we enforce all the well-known physical equations, the same ones as in any mesh model, and then we use machine learning to find, within that space of candidate models, the model that gives the best history match.

Can you apply uncertainty on your allocated production data to determine the impact on the forecast?

Yes, absolutely. Having said that, bear in mind that a mesh model over five years would typically carry plus or minus 50% error, or uncertainty, attached to each well; our tool typically has one tenth of that uncertainty, so plus or minus 5% at the well level, as opposed to plus or minus 50%. The need for assessing uncertainty is therefore smaller, but most clients require us to do it, and of course we do.

What is the minimum number of required parameters you need?

The minimum is a combination of how many years the field has been producing and the number of wells. The rule of thumb is 10 years, but 20 years is better than 10. As for the number of wells, 10 to 15 would be seen as the minimum, but again, 30 or 50 wells is better than 10 or 15. So it is a combination of the number of years and the number of wells. Prior to starting a job, we will ask for some basic information about the field in order to test whether your case is within the scope of our technology, before any job begins or any contract is signed.

What is the margin of uncertainty?

A typical uncertainty is plus or minus 5% on a well-by-well basis over 5 years, as opposed to plus or minus 50% with traditional mesh models. Based on dozens of projects, the average error observed between our forecast and actual production is about 4%-7% on a well-by-well basis and 3%-5% on total cumulative production over a 4-year period.

Do you use a third party or proprietary machine learning algorithm? If using a third party, which one?

No, the machine learning used is proprietary to FOROIL. It was originally developed for the nuclear and defense industries and subsequently modified for solving reservoir engineering challenges.

What is the model based on where production data is used to train it?

The model is based on the same reservoir physics equations used in standard mesh models. The system learns the behavior of the reservoir using the historical production data and the information about previous actions taken in the reservoir. The forecaster tool can then calculate the production and the behavior of the reservoir when similar actions are applied in the future.

What is FDP?

An FDP is a Field Development Plan, which means a detailed definition of all the actions recommended to optimize the production of a field, together with the forecast outcome. These actions typically include adjusting every production and injection rate, converting producers to injectors, de-bottlenecking some surface capacities, and drilling new wells.

If your model can learn on its own and run overnight using HPC, why does it take 3-4 months to run the solution?

Because the Forecaster model is custom built specifically for each individual reservoir it can take 2-3 months to build and test. Once it is built, and the client's objectives and constraints have been defined, millions of potential field development scenarios can be run overnight.

Are there any other limitations of the data-physics model, aside from the requirement of a multi-year or multi-decade production history?

Aside from the appropriate amount of data history, the reservoir needs to have enough wells (at least 15-20 minimum) to provide a rich enough dataset to work with.

Has any study been conducted to compare your forecasts with forecasts based on reservoir simulation?

Deviations in our forecasts have been validated to lie in the range of 4%-7%, on average, on a well-by-well basis.

How do the models upon which the optimization is done differ from a typical reservoir or asset model?

There are numerous differences but the main ones are as follows:

  • The Forecaster model is a behavioral model of the reservoir. It focuses on describing what the reservoir "does" and how it behaves over time as activities are undertaken in the field. It does not attempt to describe what the reservoir "is". The standard simulator model describes in great detail what the reservoir "is" but in doing so becomes much more complex, which impedes its ability to produce accurate forecasts.
  • The forecaster model is built with the appropriate level of complexity relative to the data that is available. This process uses proven theorems of machine learning for the minimization of forecast error.

Any simplifying assumptions in your model? As in PVT, compressibility, relative permeabilities?

The complexity of the space of solutions is adapted to the richness of the data set.

Should we assume that well penetrations and completions in each reservoir zone or compartment, are required to model performance?

Yes, completion profile, intervals and well trajectories are required.

Is the idea to have the tool on the desks of the Reservoir Engineers?

In the typical case, fully optimized field development plans are delivered that contain specific sets of recommendations to be implemented in the field. However, if the client wants the ability to build models, and optimize FDPs on their desktops, technology transfer options are available.

Has this technique been applied to WAG developments?

We have applied the technology to fields where both gas and water were injected with excellent results both in forecast and optimization.

The behavior model depends on what is going on today. But will it change in 5 years as more hydrocarbons have been taken out?

Essential behaviors of the brownfield are captured from its production history and are not expected to change much over the next five years: after five more years, the field will land quite close to FOROIL's model projection when fed with the operational inputs actually applied. However, the model is dynamic, and parameters can be modified as new wells are drilled and adjustments are made in the field. The optimizer can also be re-run to produce a modified and enhanced Field Development Plan incorporating the new contingencies.

What specific simulator are you using?

The simulator is proprietary to FOROIL and custom built for each reservoir.

Would it be correct to say that 'Digital Oil Recovery' is a workflow rather than a software?

Digital Oil Recovery is not software, but it is much more than a workflow. It uses proprietary software and patented processes involving advanced mathematics, reservoir physics, and machine learning to produce a highly accurate forecast model and generate the best FDPs out of the tens of millions tested.

In case we don't have enough available real data, how reliable would machine learning models based on synthetic simulation models be?

The corresponding data set would most probably be poor and lack high-frequency content. Such an academic exercise would need special care.

Do you have plans to increase infill drilling as the best option?

Yes. We discuss multiple economic scenarios with our clients, and if they choose scenarios that involve CAPEX, then new drilling locations would likely be recommended as a component of the best plan.

How much will it cost for a FOROIL report on an example of a 100 wells field?

Digital Oil Recovery pricing is based on a small fraction of the value we deliver so it would depend on the specifics of the field in question. ROI for the client is typically in the 2000% - 3000% range.

Can you comment on the challenges around applying this to unconventional reservoirs?

We do not foresee any specific difficulty, but it has not yet been attempted.

What machine learning technique is being used?

The machine learning is a custom, patented process that combines advanced mathematics with reservoir and well physics, as well as heuristic, deterministic, and non-deterministic approaches, in our advanced hybridized optimization process.

Can your method be applied for green field FDP development?

DOR technology does not typically apply to green field development as the process requires 7-10 years minimum of historical data to learn from. In addition it does not attempt to recommend new "step out" drilling locations or activities that have not been previously performed in the field.

I am puzzled; I hear that the work is done by FOROIL rather than by the engineers working on the asset! Forecasting to me means weekly, monthly, quarterly and annually! Please clarify how to operationalize the tool.

The forecast model is built by FOROIL's PhD mathematicians, reservoir engineers, data scientists and developers using their patented process and the available measured data over the entire field life. Events at daily or weekly time scale (surveillance of fluid levels, adjustment of pump rates, equipment maintenance and repair, etc.) must be carefully monitored and managed by the Client's operations team. FOROIL's forecast is reviewed with the Client's reservoir engineering team on a monthly to quarterly basis to take into account larger-scale events (e.g. well drilling schedule, changes to surface capacities) directly relevant to the reservoir management and incremental recovery.

Which well data do you need as minimum?

Production and injection volumes by well, by phase, by month. Well test data. Dynamic and static pressure data.

Can this technology be used to optimize the location and number of wells in an infill drilling campaign?

Yes, by identifying the under-drained or under-swept or over-depleted areas of the reservoir, an optimized infill drilling campaign is designed, accounting for interactions between wells and compliant with the current and future capacities of surface facilities.

Do you foresee problems with modeling steam recovery with gravity drainage?

No, we do not foresee any problem applying the technology to Steam Assisted Gravity Drainage (SAGD).

Is your history matching technique based on experimental design?

Our history matching is an iterative process that uses machine learning to scale the model complexity up and down relative to the data available in order to produce a model that has the best history match with the lowest forecast error.
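The idea of scaling model complexity relative to the available data, so that the best history match also yields the lowest forecast error, can be illustrated generically with holdout validation: fit models of increasing complexity on the early history and score them on held-out later data. This is a standard textbook illustration, not FOROIL's actual patented process; the decline curve, polynomial model family and split are all assumptions for the example.

```python
# Illustrative sketch (not FOROIL's process): choose model complexity
# by minimizing error on held-out data rather than on the fitting data
# alone, so a perfect history match does not imply a good forecast.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(30, dtype=float)
# Toy "production history": exponential decline plus measurement noise.
rates = 100.0 * np.exp(-0.05 * t) + rng.normal(0.0, 1.0, t.size)

fit_t, fit_r = t[:24], rates[:24]    # history-match window
val_t, val_r = t[24:], rates[24:]    # held-out validation window

def holdout_error(degree: int) -> float:
    """Fit a polynomial of the given complexity, score it on holdout."""
    coeffs = np.polyfit(fit_t, fit_r, degree)
    return float(np.mean((np.polyval(coeffs, val_t) - val_r) ** 2))

# Scan complexities: too simple underfits, too complex overfits and
# forecasts badly; keep the complexity with the lowest holdout error.
errors = {d: holdout_error(d) for d in range(1, 10)}
best_degree = min(errors, key=errors.get)
print(best_degree, errors[best_degree])
```

The iterative up-and-down complexity scaling described in the answer plays the same role as this scan: matching model richness to the information content of the data.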