Oil and Gas Equipment Monitoring and Analytics

This article is intended for upstream oil and gas professionals who want to learn about a wide range of Google Cloud Platform (GCP) services. It outlines how you can build a globally distributed system that communicates with and monitors on-site equipment, so that you can track equipment health in real time and predict impending equipment failure. By taking advantage of cloud services, you can schedule planned maintenance instead of resorting to expensive and inefficient emergency repairs that destabilize the entire system.

This article outlines a system to:

  • Use multiple sensors to collect and aggregate data from operational components associated with production systems.
  • Leverage supervisory control and data acquisition (SCADA) networks currently used at many sites, and add an edge gateway that allows access to cloud services.
  • Gain insights into operational health by using cloud services to analyze the collected data in combination with data from related information systems.
  • Optimize production and maintenance by applying machine learning (ML) predictive scenarios.

The problem: equipment breakdown

For an oil well to produce reliably, it must operate continuously, often in remote and harsh environments. Given the mechanical nature of oil well equipment such as compressors, submersible pumps, and pressure valves, breakdowns are common.

Unexpected breakdowns lead to increased downtime and high repair costs. Understandably, oil well operators prize efficient equipment monitoring, diagnosis, and predictive failure analysis. In response, the industry has begun incorporating on-site sensors, internet of things (IoT) technologies, and data analytics into operations.

Sensors and data

Oil and gas equipment sensors generate data continuously. Commonly used sensors monitor temperature, pressure, flow, and air quality. These sensors produce scalar measurements, which are easier to process than video, images, or data from vibration sensors.

The volume of data generated at any site varies drastically based on the number and type of sensors installed, as well as the assets being monitored. To store and retrieve this data, operational databases, referred to as historians, are generally installed within close network proximity of sites. These databases store and retrieve basic sensor data, identify potential issues, and generate alarms and alerts. To provide analytics and workflow capabilities, these databases are often integrated with asset management or business systems.

SCADA to cloud

SCADA is an industry mainstay for connecting a network to equipment and sensors, collecting data, and providing a control plane for managing sensor-generated events. Supervisory systems at the plant or oil well directly control assets and sensors. A key advantage of such a localized approach is low alert response latency, which is ideal for simple rules-based monitoring, alerting, and response triggering. However, to provide these functions across a geographically distributed network of sites, you need more sophisticated systems.

Gateways

System architectures are evolving to use cloud-based systems for big data functionality, while leveraging SCADA systems for control functionality at the edge. Introducing a gateway device at the boundary of the SCADA system cleanly isolates the local and control network from internet-based cloud communications and functionality.

Such a gateway is a logical component, and can be engineered into existing systems. Ideally, the gateway provides bidirectional communication, and also addresses other key needs:

  • Translating custom protocols used by sensors into protocols better suited for communication with cloud services.
  • Strong security through identity enforcement and encrypted data transmission, using standards-based mechanisms such as certificates and Transport Layer Security (TLS).
  • Mechanisms such as store-and-forward to accommodate unreliable network connectivity.
  • Bridging functions for commands that might originate from cloud-based services such as remote control systems, ML, and analytics models.
  • Capability to run analytics and models locally for low-latency scenarios.

Gateways must operate reliably and securely in harsh operating environments.
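
As an illustration, the following Python sketch shows a minimal store-and-forward loop of the kind a gateway might run. This is a sketch under stated assumptions, not a production design: the local SQLite buffer and the publish callable are hypothetical stand-ins for a real gateway's storage layer and cloud client.

    # Minimal store-and-forward sketch (illustrative, not a production gateway).
    import json
    import sqlite3

    db = sqlite3.connect("buffer.db")  # local buffer survives connectivity loss
    db.execute(
        "CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")

    def enqueue(reading):
        """Persist a sensor reading locally before any attempt to send it."""
        db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(reading),))
        db.commit()

    def drain(publish):
        """Forward buffered readings oldest-first; stop at the first failure."""
        rows = db.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall()
        for row_id, payload in rows:
            try:
                publish(payload)  # e.g., a TLS-secured MQTT publish to the cloud
            except IOError:
                return            # network is down; retry on the next cycle
            db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
            db.commit()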

Cloud services

Monitoring the condition of oil and gas well equipment requires a flexible environment for processing, alerting, reporting, analyzing, learning, and predicting. Cloud services provide this essential infrastructure and can handle the following key requirements:

  • Ability to authenticate, manage, and secure communication with gateways in the field.
  • Scalable and distributed infrastructure for streaming and complex event processing as well as batch ingestion and processing of telemetry data.
  • Flexible storage capabilities for different types of data, ranging from structured telemetry to unstructured documents and maintenance histories.
  • Ability to integrate with business systems such as supply chain, workflow, and workforce management.
  • Infrastructure for data analytics, developing ML models from historical data, and applying these models to generate predictive scenarios.

The goal: equipment health management

Predicting equipment health requires that you continually collect and analyze sensor data, and correlate events arising from that data. Enterprises use asset management systems to track inventory, overall health, and the history of all global assets. Using equipment health dashboards and related applications, you can easily track these health metrics.

The first step to gaining insights from large-scale time-based data is to visualize the data in a meaningful way. This means putting the data into the context of the equipment. For example, rotating equipment lends itself to visualizations that plot speed of rotation, duration of rotation, vibration, heat generated, pressure generated, and so on. Static equipment, such as a valve, reports usage, position, flow, pressure, and so on. By aggregating data across plants and geographies, you can monitor the overall health of all connected systems. Over time, visualizations of these types of metrics enable you to spot patterns of change.

To create these types of visualizations, use Google Data Studio, or try third-party solutions such as Tableau, Qlik, or Looker.

Analysis and insights

After you visualize the data, you can compare it to optimal operating conditions. Examining performance thresholds over time gives you insight into the overall health trends of the asset. Comparing measures across the range of monitored assets helps identify the best-performing equipment. By correlating against operating parameters, you learn which conditions yield the best performance, and you develop an analytical basis for improving the lowest-performing assets.

To do these comparative evaluations, use Google Cloud Dataflow or Google BigQuery.
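
As a hedged sketch of such a comparison in BigQuery, using the Python client library, you might rank assets by how steadily they run. The dataset, table, and column names here are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT
          asset_id,
          AVG(pressure_psi) AS avg_pressure,
          STDDEV(pressure_psi) AS pressure_variability
        FROM `my-project.telemetry.readings`
        WHERE reading_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
        GROUP BY asset_id
        ORDER BY pressure_variability  -- steadiest-running assets first
    """
    for row in client.query(query):
        print(row.asset_id, row.avg_pressure, row.pressure_variability)

Ranking assets by variability in this way is one simple basis for separating the best-performing equipment from the assets that need attention.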

Digital and predictive models

You can use patterns established from analysis to develop digital models that represent physical assets. To improve the fidelity of the models, you aggregate data related to general operation, maintenance, incidents, and problems. Developing these models often depends on using historical data to train ML algorithms. It is important to connect operating conditions to failure incidents so that the algorithms can identify what might have caused each incident. By drawing on a set of observed operating conditions, these trained ML models can then identify and predict when asset performance will be affected. This approach can be valuable in avoiding or reducing the impact of potential outages.

To develop digital and predictive models, use Google TensorFlow and Google Cloud Machine Learning Engine.

Figure 1 outlines the overall architecture, showing the flow of data from on-site sensors to cloud-based services.

Figure 1: Architectural overview

Although other functionalities such as incident alerting and maintenance workflows are important, they fall outside the scope of this article.

The solution: GCP technologies and infrastructure

Google Cloud Platform provides differentiated infrastructure components for developing equipment health monitoring systems.

Cloud IoT Core

Google Cloud IoT Core is a managed infrastructure that enables you to connect, manage, and communicate with IoT devices.

  • Device management provisions and manages field gateways. Using a consistent management interface, you can incorporate a broad set of hardware devices from Google partners.
  • The Protocol Bridge can ingest data at scale by using the TLS-secured MQTT protocol. You can also use the bridge to manage assets by passing configuration and command messages back to gateways for further processing by SCADA.
  • Telemetry data ingested through Cloud IoT Core is routed to Cloud Pub/Sub for processing.

Cloud IoT Core controls which systems communicate with the cloud and how sensor data from those systems is transmitted to cloud-based systems. You can provision and manage sensors, custom bridges to SCADA systems, and gateways based on geographic or other logical groupings best suited to your organizational structure. You can easily pattern this structure after the hierarchy used in enterprise asset management systems.
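
To make the connection flow concrete, here is a sketch of a gateway publishing one telemetry message through the Cloud IoT Core MQTT bridge. The project, registry, device, and key-file names are placeholders; the JWT must be signed with a private key whose public half is registered with the device in Cloud IoT Core.

    import datetime
    import ssl

    import jwt                        # pyjwt, used to build the device JWT
    import paho.mqtt.client as mqtt

    project, region = "my-project", "us-central1"
    registry, device = "well-site-gateways", "gateway-042"

    # Cloud IoT Core authenticates devices with a short-lived JWT as the password.
    token = jwt.encode(
        {"aud": project,
         "iat": datetime.datetime.utcnow(),
         "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=60)},
        open("rsa_private.pem").read(), algorithm="RS256")

    client = mqtt.Client(client_id=(
        f"projects/{project}/locations/{region}"
        f"/registries/{registry}/devices/{device}"))
    client.username_pw_set(username="unused", password=token)
    client.tls_set(tls_version=ssl.PROTOCOL_TLSv1_2)
    client.connect("mqtt.googleapis.com", 8883)
    client.publish(f"/devices/{device}/events", '{"pressure_psi": 512.3}', qos=1)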

Cloud Pub/Sub

Google Cloud Pub/Sub provides a managed, highly scalable messaging infrastructure.

  • A globally available endpoint ingests telemetry with low latency, regardless of where the traffic originates.
  • Telemetry data ingested through the Protocol Bridge in Cloud IoT Core is automatically routed into Pub/Sub.

Design subscriptions to Pub/Sub topics, and process the associated messages, to accommodate your various telemetry sources. For example, you can direct telemetry from rotating equipment to stream processing while batch-processing less latency-sensitive information. Because Pub/Sub delivers messages with low latency, aggregate information is available system-wide.
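
A minimal pull subscriber in Python might look like the following; the project and subscription names are hypothetical:

    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    subscription = subscriber.subscription_path(
        "my-project", "rotating-equipment-stream")

    def callback(message):
        # Hand latency-sensitive readings to stream processing here;
        # less urgent data can instead be staged for periodic batch loads.
        print(message.data)
        message.ack()

    future = subscriber.subscribe(subscription, callback=callback)
    future.result()  # block until interrupted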

Cloud Dataflow

Google Cloud Dataflow is a fully managed service for stream and batch data processing pipelines.

  • Incorporate stream processing with telemetry data from Pub/Sub as a source.
  • Combine timestamp-based windowing with watermarks to generate alerts that you can reroute to field assets through Cloud IoT Core.
  • Store raw telemetry and aggregate metrics in appropriate databases.

With Cloud Dataflow, you can design flexible processing pipelines that incorporate, combine, and process data from operational and informational systems, including inspection and maintenance data, to derive comprehensive views of assets and operational health. Cloud Dataflow pipelines provide built-in functions for computing aggregations and deviations from averages, and they scale with the volume of telemetry data.
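
The following Apache Beam (Python) sketch illustrates the windowing pattern: fixed one-minute windows over a telemetry topic, averaged per device, with a crude fixed threshold standing in for a real alerting rule. The topic and field names are assumptions, not part of the reference architecture.

    import json

    import apache_beam as beam
    from apache_beam import window
    from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

    options = PipelineOptions()
    options.view_as(StandardOptions).streaming = True

    with beam.Pipeline(options=options) as p:
        (p
         | "Read" >> beam.io.ReadFromPubSub(
               topic="projects/my-project/topics/telemetry")
         | "Parse" >> beam.Map(json.loads)
         | "KeyByDevice" >> beam.Map(lambda r: (r["device_id"], r["pressure_psi"]))
         | "Window" >> beam.WindowInto(window.FixedWindows(60))
         | "Average" >> beam.combiners.Mean.PerKey()
         | "Flag" >> beam.Filter(lambda kv: kv[1] > 600)  # placeholder threshold
         | "Report" >> beam.Map(lambda kv: print("alert:", kv)))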

Cloud Bigtable

Google Cloud Bigtable is a high-performance NoSQL service for operational data.

  • Store large volumes of raw telemetry data.
  • Perform time-series analysis and do low-latency writes and reads for use in your application infrastructure.
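
A sketch of a time-series write using the Python client follows. The instance, table, column family, and row-key scheme (device ID plus a reversed timestamp, so that recent readings sort first) are illustrative choices, not requirements.

    import time

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("telemetry-instance").table("sensor-readings")

    device_id = "pump-017"
    # Reversed timestamp: the newest readings appear first in a forward scan.
    row_key = "{}#{}".format(device_id, 2**63 - int(time.time() * 1000)).encode()

    row = table.direct_row(row_key)
    row.set_cell("readings", b"pressure_psi", b"512.3")
    row.commit()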

BigQuery

Google BigQuery is a managed data warehouse for large-scale analytics.

  • Use the Cloud Dataflow processing pipeline to aggregate sensor data and alerts. Then persist that data in BigQuery and partition it by date for ongoing historical reference.
  • Ingest or federate contextual data from business systems such as asset management, maintenance, geography, and weather.
  • Use batch processes in Cloud Dataflow to run large-scale analytics against historical data in combination with contextual information.
  • Visualize the data with reporting tools.

With BigQuery, you can leverage a single, comprehensive data warehouse to derive insights from globally consistent data from multiple exploration, scientific, and field operations teams. Share data produced from different stages and aspects of field development while maintaining control over access rights. BigQuery can serve as your central infrastructure for analytics.
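
As one hedged example, the query below joins date-partitioned telemetry aggregates with weather context; all table and column names are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT
          t.asset_id,
          t.reading_date,
          AVG(t.flow_rate) AS avg_flow,
          ANY_VALUE(w.ambient_temp_c) AS ambient_temp_c
        FROM `my-project.telemetry.daily_aggregates` AS t
        JOIN `my-project.context.site_weather` AS w
          ON t.site_id = w.site_id AND t.reading_date = w.observation_date
        WHERE t.reading_date BETWEEN '2018-01-01' AND '2018-03-31'
        GROUP BY t.asset_id, t.reading_date
    """
    results = client.query(query).result()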

Cloud Datalab

Google Cloud Datalab is an interactive tool for data exploration and analysis.

  • Interactively analyze BigQuery data for patterns. Combine this analysis with contextual information from other sources.
  • Apply ML to build and evaluate models that represent field assets.

Data scientists at onshore locations can work with large datasets in Cloud Datalab and collaborate effectively with production teams at offshore or remote locations that have limited network availability.
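
In a Datalab notebook, an analyst might pull aggregates into a pandas DataFrame for interactive exploration, along these lines (the table and column names are hypothetical):

    from google.cloud import bigquery

    df = bigquery.Client().query("""
        SELECT asset_id, reading_date, avg_vibration
        FROM `my-project.telemetry.daily_aggregates`
    """).to_dataframe()

    # Plot vibration trends per asset to spot drift that may precede failure.
    df.pivot(index="reading_date", columns="asset_id",
             values="avg_vibration").plot()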

Data Studio

Google Data Studio is an environment for building interactive dashboards and reports.

  • Aggregate, visualize, and share data from multiple sources.
  • Incorporate comparison functions into charts and graphs to help spot data trends.

With Data Studio, you can build and distribute dashboards of system-wide operating conditions across your entire organization. Using access controls and drill-down mechanisms, you can ensure that data is available to users based on their role within the organization. You can also use a variety of third-party reporting tools to ensure compatibility with your organizational requirements.

Cloud Machine Learning Engine

Cloud Machine Learning Engine (Cloud ML Engine) is a managed service that enables you to build and serve ML models at scale.

  • The ML engine integrates with BigQuery, Cloud Dataflow, and Google Cloud Storage to provide a diverse set of data sources for training ML models.
  • The scalability of the infrastructure helps you build and tune high-fidelity models even with large volumes of telemetry data.

Use ML to spot patterns in operational data related to assets, maintenance, and contextual information from inspections. You can develop and deploy models within this framework to predict equipment failures and minimize unplanned asset downtime. You can also improve the ML models continuously based on new data.
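
The following TensorFlow sketch shows the shape of such a model: a small classifier relating aggregated operating conditions to a failed-soon-afterward label. The features and the randomly generated placeholder data are illustrative only; a real model would be trained on the historical telemetry and incident data described above.

    import numpy as np
    import tensorflow as tf

    # Placeholder data: one row of aggregated sensor features per asset-day
    # (e.g., temperature, pressure, flow, vibration) and a binary failure label.
    features = np.random.rand(1000, 4).astype("float32")
    labels = np.random.randint(0, 2, size=(1000, 1))

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(features, labels, epochs=5, batch_size=32)

    model.save("failure_model.h5")  # export for serving, e.g., online prediction

From there, the exported model can be deployed for serving and retrained as new telemetry and incident data accumulate.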
