Tuesday, December 27, 2011

Uncertainty approximations

In the field of experimental science, high emphasis is placed on measuring real physical variables. These measured values can then be substituted into an equation to derive useful results. A famous example of an equation requiring such measured values is Newton's Law of Gravitation. This is what it looks like:

F = G m_1 m_2 / r^2

where G is the gravitational constant, m_1 and m_2 are the two masses, and r is the distance between them.

In this case, the useful result is the gravitational force F. To derive its value, measurements of the two masses and the distance between them are necessary.

Every measurable variable has a precise real value; that is, in principle its value is a number with an infinite decimal expansion. The problem lies in the fact that measurable variables are never perfectly measurable, because the instruments we use to measure them carry a certain error in what they report. In fact, no one has ever built a perfect instrument that measures the value of a variable to its complete extent. This type of error, inherent in every instrument, is known as uncertainty.

Uncertainty is itself a quantity, expressed in the same units as the variable one is trying to measure, and it gives an idea of how widely distributed the value of that variable could be about a certain central value. I'll give you an example of how one can quantify a variable while taking its uncertainty into account. Assume you are measuring some distance, and your view of the meter rule (the measuring device) looks like this:

The measurement looks like 18.50 cm. But you simply cannot be sure of this value: it could be anywhere between 18.45 cm and 18.55 cm and would still look like 18.50 cm from the view above.

The value could also conceivably be 18.43 cm or 18.58 cm, but then it wouldn't look like it does above; it would look more like 18.40 cm or 18.60 cm respectively.

So a convenient way to express the distance measured is this:

(18.50 ± 0.05) cm


If this distance were used in the formula for Newton's Law of Gravitation, it would lead to a range of possible values of the force F. This would create uncertainty in F.

Likewise, when measured variables (with their uncertainties) are substituted into an equation, they pass their uncertainty on to the theoretical (calculated) variable; that is, they create uncertainty, or possible deviations, in it. This phenomenon is known as uncertainty propagation.
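To see this concretely, here is a short Python sketch. The masses are made-up example values of my own (not from any real experiment); the distance is the 18.50 ± 0.05 cm reading from above, converted to metres. It brute-forces the range of F when only r is uncertain:

```python
# Illustrative sketch: how an uncertainty in a measured distance r
# propagates into Newton's gravitational force F = G*m1*m2/r^2.
# The masses below are made-up example values; r is the 18.50 cm
# reading from the text, with its +/- 0.05 cm uncertainty, in metres.

G = 6.674e-11           # gravitational constant, N m^2 kg^-2
m1, m2 = 5.0, 3.0       # example masses in kg (assumed exact here)
r, dr = 0.1850, 0.0005  # central distance and its uncertainty, in m

def force(r):
    return G * m1 * m2 / r**2

f_central = force(r)
f_max = force(r - dr)   # a smaller r gives a larger F
f_min = force(r + dr)   # a larger r gives a smaller F

print(f"F = {f_central:.4e} N, ranging from {f_min:.4e} N to {f_max:.4e} N")
```

Because F depends on r to the power −2, the fractional uncertainty that appears in F comes out at roughly twice the fractional uncertainty in r, which is a first taste of the power rule discussed in the rest of this post.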

I'm going to illustrate the idea of uncertainty propagation to you for a certain type of formula. Assume a formula of the following form exists for a theoretical variable Z, where the powers a, b and c are positive constants:

Z = A^a B^b / C^c


A, B and C are measured variables.

A, B and C have some uncertainties to them, as expressed below:

A ± ΔA,   B ± ΔB,   C ± ΔC

As a result of the uncertainties in A, B and C, Z develops an uncertainty too (since A, B and C determine the value of Z). Thus, Z can be expressed as shown:

Z ± ΔZ

The theory, at least as far as the A level syllabus is concerned, states that the fractional uncertainty in Z can be approximated from the fractional uncertainties in A, B and C and their powers in the following way, provided that ΔA/A, ΔB/B and ΔC/C are all much smaller than 1:

ΔZ/Z ≈ a(ΔA/A) + b(ΔB/B) + c(ΔC/C)

My aim is to derive this approximation. I assume you have a strong knowledge of the binomial theorem for all real powers, and that you're open-minded about approximations, because some of the ones below might look cheap or desperate.

To start off, Z could have a maximum value of Z + ΔZ (ΔZ positive) due to the combination of the uncertainties in the three measured variables. This happens when A and B are at their largest possible values and C is at its smallest:

Z + ΔZ = (A + ΔA)^a (B + ΔB)^b / (C − ΔC)^c = Z (1 + ΔA/A)^a (1 + ΔB/B)^b (1 − ΔC/C)^(−c)

According to the binomial theorem:

(1 + ΔA/A)^a = 1 + a(ΔA/A) + [a(a−1)/2!](ΔA/A)^2 + ...

and similarly for the B and C factors.

Another approximation: since ΔA/A, ΔB/B and ΔC/C are much smaller than 1, the squared and higher-order terms in each expansion can be dropped:

Z + ΔZ ≈ Z (1 + a ΔA/A)(1 + b ΔB/B)(1 + c ΔC/C)

Another approximation: multiplying out the brackets and discarding every product of two or more small terms:

Z + ΔZ ≈ Z (1 + a ΔA/A + b ΔB/B + c ΔC/C)

so that:

ΔZ/Z ≈ a(ΔA/A) + b(ΔB/B) + c(ΔC/C)

This shows how the positive fractional deviation or positive fractional uncertainty in Z can be computed.
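This result can also be checked numerically. The Python sketch below uses arbitrary example values and powers of my own choosing (not from any real experiment) and compares the exact positive fractional deviation with the first-order approximation:

```python
# Numerical check of dZ/Z ≈ a*(dA/A) + b*(dB/B) + c*(dC/C) for
# Z = A^a * B^b / C^c. All values and powers are arbitrary examples.

A, dA = 2.0, 0.02
B, dB = 5.0, 0.01
C, dC = 3.0, 0.03
a, b, c = 2, 3, 1

Z = A**a * B**b / C**c
Z_max = (A + dA)**a * (B + dB)**b / (C - dC)**c  # every deviation pushes Z up

exact_frac = (Z_max - Z) / Z            # true positive fractional uncertainty
approx_frac = a*dA/A + b*dB/B + c*dC/C  # the first-order approximation

print(exact_frac, approx_frac)  # the two agree to within second-order terms
```

The small difference between the two printed numbers is exactly the second-order material that the approximations above threw away.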

Z could also have a minimum value, Z − ΔZ, due to the distribution of the values of the measured variables. This happens when A and B are at their smallest possible values and C is at its largest:

Z − ΔZ = (A − ΔA)^a (B − ΔB)^b / (C + ΔC)^c = Z (1 − ΔA/A)^a (1 − ΔB/B)^b (1 + ΔC/C)^(−c)

According to the binomial theorem:

(1 − ΔA/A)^a = 1 − a(ΔA/A) + [a(a−1)/2!](ΔA/A)^2 − ...

and similarly for the B and C factors.

Another approximation: dropping the squared and higher-order terms again:

Z − ΔZ ≈ Z (1 − a ΔA/A)(1 − b ΔB/B)(1 − c ΔC/C)

Another approximation: multiplying out the brackets and discarding every product of two or more small terms:

Z − ΔZ ≈ Z (1 − a ΔA/A − b ΔB/B − c ΔC/C)

so that, once again:

ΔZ/Z ≈ a(ΔA/A) + b(ΔB/B) + c(ΔC/C)

This shows how the negative fractional deviation, or negative fractional uncertainty, in Z can be computed. It also shows that the positive and negative fractional uncertainties in Z are the same, which means the positive and negative uncertainties in Z are the same: knowing either one immediately gives you the other. Lastly, it implies that the central values of A, B and C will give you the central value of Z under the stated conditions, since Z deviates by the same amount in both directions.
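Here is a quick numerical sanity check of that symmetry claim, again with arbitrary example values and powers of my own choosing:

```python
# Sanity check of the symmetry claim: for Z = A^a * B^b / C^c the
# positive and negative fractional deviations agree to first order.
# All values and powers are arbitrary examples.

A, dA = 2.0, 0.02
B, dB = 5.0, 0.01
C, dC = 3.0, 0.03
a, b, c = 2, 3, 1

Z = A**a * B**b / C**c
Z_max = (A + dA)**a * (B + dB)**b / (C - dC)**c
Z_min = (A - dA)**a * (B - dB)**b / (C + dC)**c

pos = (Z_max - Z) / Z  # positive fractional deviation
neg = (Z - Z_min) / Z  # magnitude of the negative fractional deviation

print(pos, neg)  # nearly equal; they differ only in second-order terms
```

The two deviations are not *exactly* equal for finite uncertainties; they only coincide once the second-order terms are discarded, which is precisely what the stated conditions allow.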

I was wondering whether one could extend this type of approximation to a more general expression for Z, given similar conditions on all of the measured variables:

Z = A_1^(p_1) A_2^(p_2) A_3^(p_3) ... A_n^(p_n)

Which can be conveniently defined in the following way:

Z = ∏_{i=1}^{n} A_i^(p_i)

In other words: can the fractional uncertainty in this Z look like the following, as predicted from the derivation above? (The modulus signs allow the powers p_i to be negative.)

ΔZ/Z ≈ Σ_{i=1}^{n} |p_i| (ΔA_i/A_i)

Read on to find out. I'll be using the same method of analysis as I did in the derivation earlier on.

According to the binomial theorem, taking the + sign in each bracket when p_i is positive and the − sign when p_i is negative (so that every factor pushes Z towards its maximum):

Z + ΔZ = ∏_{i=1}^{n} (A_i ± ΔA_i)^(p_i) = Z ∏_{i=1}^{n} (1 ± ΔA_i/A_i)^(p_i) = Z ∏_{i=1}^{n} [1 + |p_i|(ΔA_i/A_i) + (higher-order terms)]

Another approximation: dropping the higher-order terms in each factor:

Z + ΔZ ≈ Z ∏_{i=1}^{n} (1 + |p_i| ΔA_i/A_i)

Another approximation. This one can be worked out algebraically: expand the product of all n brackets and discard every term containing two or more of the small factors ΔA_i/A_i. It is very lengthy to write out in full, so I will omit the algebra here; you can try working through it privately, or you can ask me personally:

Z + ΔZ ≈ Z (1 + Σ_{i=1}^{n} |p_i| ΔA_i/A_i), that is, ΔZ/Z ≈ Σ_{i=1}^{n} |p_i| (ΔA_i/A_i)

Wow! An exact match for the prediction!

But of course, I only calculated the positive fractional uncertainty. You can try working out the negative fractional uncertainty; its modulus should be exactly the same as the positive value.
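The general rule can be checked numerically too. In the sketch below, the values and the powers (including a negative one and a fractional one) are all invented examples:

```python
# Numerical check of the general rule dZ/Z ≈ sum(|p_i| * dA_i/A_i)
# for Z = A_1^p_1 * ... * A_n^p_n. The values and the powers
# (including a negative and a fractional one) are all invented.

A  = [2.0, 5.0, 3.0, 7.0]
dA = [0.02, 0.01, 0.03, 0.05]
p  = [2, 3, -1, 0.5]

def product(values):
    out = 1.0
    for v, q in zip(values, p):
        out *= v**q
    return out

Z = product(A)
# To maximise Z, push A_i up when p_i > 0 and down when p_i < 0.
A_extreme = [v + d if q > 0 else v - d for v, d, q in zip(A, dA, p)]
Z_max = product(A_extreme)

exact_frac = (Z_max - Z) / Z
approx_frac = sum(abs(q) * d / v for v, d, q in zip(A, dA, p))

print(exact_frac, approx_frac)  # match to within second-order terms
```

The sign-flipping in `A_extreme` is the numerical counterpart of the ± sign choice in the derivation, and the modulus in `approx_frac` is what makes the negative power contribute positively to the uncertainty.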

Picture sources:

LaTeX source:
