When most people think of uncertainty, they become uneasy because, on the surface, the answers are not straightforward. That anxiety can push decision-making away from the rational and introduce significant risk to business operations.
Paradoxically, however, calculating uncertainty enables people to make certain statements by quantifying the doubt in their measurements, and it allows businesses to incorporate that doubt rationally into their decisions.
Uncertainty is doubt over the result of a measurement taken.
For example, when measuring the temperature outside, a thermometer may read 85 °F but have an uncertainty of ±2 °F associated with that measurement. Including the ± when reporting the value makes the declared value certain, essentially saying, “I am sure it is between 83 and 87 °F outside.”
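To make that concrete, here is a minimal sketch in Python; the numbers are just the thermometer example above, not real data:

```python
# Turn a reading with a stated uncertainty into an explicit interval.
reading_f = 85.0       # thermometer reading, °F (example value from the text)
uncertainty_f = 2.0    # stated uncertainty, ±°F

low, high = reading_f - uncertainty_f, reading_f + uncertainty_f
print(f"{reading_f} ± {uncertainty_f} °F -> between {low} and {high} °F")
# 85.0 ± 2.0 °F -> between 83.0 and 87.0 °F
```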
Uncertainty, however, is not “error.” Error represents the difference between an attribute's actual value and the measurement taken.
It is possible to correct errors by applying calibrations. For example, what if the thermometer reads 3 degrees higher than the real temperature? We can apply a correction to the reading, so we know the temperature is between 80 and 84 °F.
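A similarly small sketch shows how a known calibration offset is handled, again using the example numbers from the text (a thermometer that reads 3 °F high):

```python
# Apply a calibration correction for a known systematic offset; the
# uncertainty still applies to the corrected value.
reading_f = 85.0      # raw thermometer reading, °F
offset_f = 3.0        # thermometer reads 3 °F higher than the true temperature
uncertainty_f = 2.0   # stated uncertainty, ±°F

corrected_f = reading_f - offset_f
print(f"corrected: {corrected_f} ± {uncertainty_f} °F "
      f"(between {corrected_f - uncertainty_f} and {corrected_f + uncertainty_f} °F)")
# corrected: 82.0 ± 2.0 °F (between 80.0 and 84.0 °F)
```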
Error is broken into two broad categories: systematic error and random error.
Systematic errors are repeatable errors that come from the measurement devices themselves, due to the limitations of the instruments or of the people reading them. Systematic errors can lead to bias, meaning the measured values are consistently offset from the true value. While people associate bias with underhandedness, it almost always comes from innocent systematic errors in instruments or procedures, like the thermometer that reads 3 degrees higher than it should.
Not every error is a mistake, though: some error is random. Every measurement has variability from the environment, the instrument, or how the reading is taken. Random error can be mitigated by repeating the measurement several times and using the average value. That average has an associated uncertainty that quantifies the remaining doubt, and, generally, more measurements reduce the uncertainty.
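As a rough illustration of averaging repeated readings, the sketch below uses made-up values and estimates the uncertainty of the average as the standard error of the mean (one common convention, not the only one):

```python
import math
import statistics

# Hypothetical repeat readings of the same temperature, °F (illustrative only).
readings_f = [84.6, 85.3, 84.9, 85.1, 84.7, 85.4]

mean_f = statistics.mean(readings_f)
spread_f = statistics.stdev(readings_f)               # variability of individual readings
std_error_f = spread_f / math.sqrt(len(readings_f))   # uncertainty of the average

print(f"average: {mean_f:.2f} ± {std_error_f:.2f} °F")
# More readings -> larger n -> smaller standard error, i.e. less doubt in the average.
```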
Common sources of error include:
Instrument error: This can arise from inaccurate instruments (for example, a calculator that rounds the wrong way).
Environmental error: This comes from an uncommon event in the environment influencing a measurement (like a windy day when you do a flyover).
Procedural error: This can occur when different procedures provide different answers (like when one person decides to round up while another rounds down).
Human error: This is sourced from carelessness (like mistranscribing a measurement) or limitations of human ability (such as estimation error on a ruler). However, even errors whose source cannot be identified or accounted for still factor into the uncertainty.
Truly understanding a measurement requires applying uncertainty analysis to errors and biases to get the complete picture of what you are measuring. It only becomes more challenging to understand how all those measurements and their related uncertainties come together when you start adding more types of measurements, different methodologies, and more sources of error.
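To give a flavor of that bookkeeping, the sketch below combines the uncertainties of independent measurements that are being summed, using the common root-sum-of-squares convention; the values are purely illustrative, and correlated errors or mixed methodologies would need fuller treatment:

```python
import math

# (value, uncertainty) pairs for independent measurements being added together
# (illustrative numbers only).
measurements = [(120.0, 4.0), (75.0, 2.5), (40.0, 1.0)]

total = sum(value for value, _ in measurements)
combined_uncertainty = math.sqrt(sum(u ** 2 for _, u in measurements))

print(f"total: {total} ± {combined_uncertainty:.1f}")
# total: 235.0 ± 4.8
```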
For companies in the oil and gas industry, this means fully understanding hard-to-measure attributes like methane intensity, vapor pressure, and crude gravity will remain incredibly complex. With more data and different types of detection technology now accessible, a team of data scientists, engineers, and physical scientists is required to disentangle measurement data in insightful and usable ways — especially for upcoming compliance obligations and voluntary reporting methodologies like OGMP 2.0.
Watch this webinar to learn more about how to address measurement uncertainty and how it is critical in understanding emissions not just at small-scale facilities but for basin-wide assets.