
Operating Model Redesign with “Big Data” to Optimize E&P Economics

October 11, 2016 1:47 AM
Justin Pettit

The following article is part three of a three-part series discussing upstream operating models. View part two here.

At $100 oil, the most important cost for many operators was the opportunity cost of foregone barrels. This provoked great interest in basins and plays with higher IPs (e.g. the Bakken and Eagle Ford) and in operational excellence initiatives to speed well delivery (e.g. Lean Production Systems, Theory of Constraints, Six Sigma). Indeed, it seemed that “Jonah” was back! :) Process standardization was as much about speeding well delivery to minimize opportunity cost as it was about optimizing single-well economics. But at $40-50 oil, the Permian basin’s stacked plays and lower costs are more attractive than some other plays with higher IPs, and operator economics are shaped by the optimization of well productivity. Operators must find a new balance between standardization and optimization in their operating models.

Geoscience Workflows and Big Data Analytics

One of the defining characteristics of the oil and gas industry, especially in the upstream, is the importance of geologists, geophysicists, and engineers. This has been true for so long, and to such an extent, that leadership and management roles throughout the industry are filled with people who began their careers in technical roles before migrating into management. The growth and evolution of the industry have only increased the importance of technical expertise: big strategic choices and commercial business decisions are inevitably confounded by, and confused with, technical issues.

Geoscience workflows are the key to reducing costs and increasing recovery, and they respond favorably to deployment in more locally focused applications (i.e. by resource type, field, basin, or sub-play) that improve development plans and well design. Asset teams have a large and growing amount of internal data: seismic and well data that is becoming more detailed, more accurate, and more timely. Most operators can also gather large amounts of third-party seismic and offset-well data; analysis can then include all operators in a given sub-play, accounting for important differences (e.g. vintage) to achieve full business potential.

[Figure] Multi-disciplinary Work Flow for Subsurface Interpretation of Unconventional Resources (Source: IHS Energy)
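As an illustration of the kind of pooled, cross-operator offset-well analysis described above, the sketch below benchmarks lateral-length-normalized well productivity by completion vintage within a sub-play. All well data, column names, and numbers here are invented for illustration; a real workflow would draw on licensed third-party well databases.

```python
import pandas as pd

# Illustrative offset-well dataset spanning multiple operators in one
# sub-play; every value and column name here is hypothetical.
wells = pd.DataFrame({
    "operator":   ["A", "A", "B", "B", "C", "C"],
    "sub_play":   ["Wolfcamp A"] * 6,
    "vintage":    [2013, 2015, 2013, 2015, 2013, 2015],
    "lateral_ft": [4500, 7500, 5000, 7000, 4800, 7600],
    "ip90_boed":  [450, 900, 500, 850, 480, 940],
})

# Normalize productivity by lateral length so wells of different
# vintages (and designs) can be compared on a like-for-like basis.
wells["ip90_per_1000ft"] = wells["ip90_boed"] / wells["lateral_ft"] * 1000

# Benchmark the full sub-play across all operators, by vintage,
# to separate design evolution from underlying rock quality.
benchmark = (
    wells.groupby(["sub_play", "vintage"])["ip90_per_1000ft"]
         .agg(["mean", "std", "count"])
         .reset_index()
)
print(benchmark)
```

The same grouping approach extends naturally to other "important differences" the text mentions, such as completion design generation or landing zone, once those attributes are available as columns.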

For example, in most plays it is critical to improve the detection, understanding, and prediction of fault and fracture networks, but it is difficult to visualize their exact location and geometry. Higher-resolution visualization of fault networks enables workflows to delineate faults and predict fluid pathways more reliably, and to integrate the results with other seismic and well data to estimate fault network volumes. ‘Sub-visual’ faults are often incorrectly relegated to a sub-seismic or ‘unmappable’ category, but they can be extracted with newer technology, experience, and combinations with other data. Automated fault extraction, with information at sub-visual levels, can serve as a starting point for labor-intensive manual fault mapping (i.e. interpretation and model generation).
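A minimal sketch of the idea behind automated fault extraction: flag locations where adjacent seismic traces stop resembling one another. The toy 2-D slice and the simple neighbor-trace correlation below are stand-ins for the 3-D coherence/semblance attributes used in practice; nothing here reflects any specific vendor algorithm.

```python
import numpy as np

# Toy 2-D seismic amplitude slice with a vertical "fault": traces to the
# right of column 8 are shifted by three samples. Entirely synthetic.
n_traces, n_samples = 16, 64
t = np.arange(n_samples)
slice_ = np.array([np.sin(2 * np.pi * (t + (3 if i > 8 else 0)) / 16)
                   for i in range(n_traces)])

# A crude similarity attribute: correlation of each trace with its right
# neighbor. Continuous reflectors correlate highly; a fault breaks that.
similarity = np.array([
    np.corrcoef(slice_[i], slice_[i + 1])[0, 1]
    for i in range(n_traces - 1)
])

# Threshold low-similarity trace pairs as candidate fault locations.
# These candidates seed, rather than replace, manual interpretation.
candidates = np.where(similarity < 0.9)[0]
print(candidates)
```

Running this flags the trace pair straddling the synthetic fault, which is the sense in which automated extraction gives interpreters a starting point rather than a finished fault model.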

However, not all attributes produce reliable and meaningful results, so it is necessary to screen multiple algorithms and compare them against other reference data. Locally based workflows ensure that fault mapping is calibrated and tested against indications from other data, including seismic, well, drilling, or production data. This helps to close the gap between seismically identified faults and those identified from well data (e.g. image logs, cores, correlation, well tests, productivity, fluid losses, etc.). When combined with fracture flow properties and geomechanical data, well-constrained and spatially exact flow simulations can be used to optimize well productivity and well economics.
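The screening step above can be sketched as a simple scoring exercise: compare each algorithm's candidate fault picks against faults identified independently from well data, and rank algorithms by how well they recover those reference faults. The algorithm names, depths, and tolerance below are all hypothetical.

```python
# Faults identified from well data (image logs, fluid losses, etc.),
# expressed here as measured depths in feet. Values are invented.
well_faults = {1200, 1850, 2400}

# Candidate fault picks from three hypothetical attribute algorithms.
algo_picks = {
    "semblance":    {1195, 1850, 3100},
    "curvature":    {1200, 1855, 2395, 2900},
    "ant_tracking": {1850},
}

def score(picks, reference, tol=10):
    """Recall: fraction of reference faults matched within +/- tol ft.
    Precision: fraction of picks that match some reference fault."""
    matched = {r for r in reference if any(abs(p - r) <= tol for p in picks)}
    hits = {p for p in picks if any(abs(p - r) <= tol for r in reference)}
    return len(matched) / len(reference), len(hits) / len(picks)

for name, picks in algo_picks.items():
    recall, precision = score(picks, well_faults)
    print(f"{name}: recall={recall:.2f} precision={precision:.2f}")
```

An algorithm with high precision but low recall (here, the "ant_tracking" stand-in) is trustworthy but incomplete; one with high recall and lower precision needs manual vetting of its extra picks, which is exactly the calibration trade-off the text describes.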

Standardization versus Optimization

There are many benefits to standardization; as a former process engineer, I would argue that the oil and gas industry generally needs to employ much more of it, in processes, procurement specifications, and elsewhere, in order to reduce costs! However, geoscience workflows, which can reduce costs, increase well productivity, and improve recovery, respond favorably to calibration in an asset-focused application. Furthermore, there are situations, such as drilling and completions in unconventionals, where efficient execution means adjusting to real-time information to optimize well performance, or to manage the (inevitable) unforeseen complications.

Notwithstanding the new popularity of lean manufacturing and factory-style production, D&C priorities must strike a balance between the efficiency gains of standardization and the productivity gains of well optimization. In unconventionals, subsurface risk manifests as tremendous regional variation in well productivity – not simply in the areal extent of core acreage, but also in many key elements of the geomechanical model, including geology, stress states, fault and fracture networks, pore pressure, etc. Technology will not realize its maximum potential unless its deployment is appropriate. We must therefore guard against taking centrally prescribed standardization too far on matters such as downspacing and lateral length, fracking techniques, proppant usage, and the use and design of multi-stage frac sleeves, composite plugs, open-hole isolation systems, etc.

Business Unit versus Headquarters

There is a natural tension in deciding which capabilities should sit in a corporate functional center of excellence and which should sit with a business unit or asset team. For example (and yes, this is from a different industry), generic drug company Sandoz made a significant investment in biosimilars, but then had to negotiate with both its parent company, Novartis, and the parent's branded pharmaceutical division for commercialization of the portfolio: a question of balancing the parent's operating model against the unique demands of the business unit's low-cost industry.

To better integrate technical capabilities, organizational structures must migrate toward greater adoption of asset-centric (i.e. geography, resource type, basin or play, etc.) cross-functional teams. Production teams will no longer be buried deep within project departments and commercial teams will be less likely to be “orphaned.” The stature and importance of these capabilities can be elevated in the shift toward commercial “business units” with greater visibility and accountability. Furthermore, flatter organizational structures – especially in “line facing” or “front-line” operations – will reduce costs and speed action.

Many leading operators in US unconventionals are independents with a strong basin position who are free to develop their acreage as required. Their positions are large enough to afford economies of scale and they can adapt to local conditions to determine an optimized approach. And they can revise as they learn or as the context changes. Some companies may tap into a global operation for scale or expertise, but significant variation between plays requires operators to also leverage upstream supply chain services companies with a local presence, for capabilities or “relevant scale” in some workflows.

A growing amount of the functional expertise must be housed within asset teams to enable greater focus on unique applications and circumstances. And with vendors increasingly serving as the “arms and legs” for an enterprise, the remaining organization must be both “leaned out” and made much flatter, with fewer layers and greater spans – efforts must be made to eliminate duplicate or overlapping capabilities, extra layers in the organization, and overhead that is no longer affordable. In many cases, extra layers were created to fit outdated thinking about reporting lines, grade levels, and compensation – thinking that has not kept up with the modern age of knowledge workers or demands of our new economic reality.

Conclusions

The utility of subsurface “big data” and analysis is dependent upon integrating that data into the asset team. For example, in unconventionals, large-scale drilling and completions must efficiently adjust to reflect real-time information to optimize well performance and to manage unforeseen complications, which requires integration across land, geosciences, procurement, drilling, completions, and production. Workflows require better communication between the land staff, geologist, geophysicist, reservoir engineer, and drilling and operations staff. Asset teams that are highly integrated will be more likely to: a) use subsurface data across the entire life-cycle of the asset to plan wells, facilities, staffing, and logistics, b) enable real-time well steering, and c) integrate geology and engineering data while planning well completions (e.g. seismically-derived pore pressure prediction).

As the industry continues to evolve to meet challenges brought on in light of our understanding of the global resource base, climate concerns and other environmental drivers, supply and demand factors, and tax and regulatory regimes – to name but a few – so too must our strategic agenda to better integrate and leverage our geosciences, engineering, and other technical capabilities.

