Radioactivity Analysis

Gross versus Specific Analyses


Some analyses are designed to detect specific radionuclides (specific analyses), while others are designed to measure the combined radiation from many radionuclides at once (gross analyses).


Specific Analyses
Gamma-emitting radionuclides, for example, are identified and quantified by specific analyses using gamma spectroscopy.
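
To illustrate the idea, the following sketch (in Python) shows how a gamma spectroscopy result points to a specific radionuclide: a measured photopeak energy is matched against a library of known gamma-ray energies. The library entries are standard reference lines; the matching tolerance and the example peak are illustrative and are not taken from this report.

    # Match a measured gamma-ray photopeak to a small library of known lines.
    # The 2 keV tolerance and the example peak energy are illustrative only.
    gamma_library_kev = {
        "Cs-137": [661.7],
        "Co-60": [1173.2, 1332.5],
        "K-40": [1460.8],
    }

    def identify_peak(peak_energy_kev, tolerance_kev=2.0):
        """Return the nuclides with a library line within the tolerance."""
        return [
            nuclide
            for nuclide, energies in gamma_library_kev.items()
            if any(abs(peak_energy_kev - e) <= tolerance_kev for e in energies)
        ]

    print(identify_peak(662.0))  # -> ['Cs-137']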


Gross Analyses
Analyses for specific alpha- and beta-emitting radionuclides, on the other hand, require more difficult and expensive radiochemical methods.  In environmental monitoring, low-cost gross measurements can therefore be substituted for the more expensive specific analyses.  Gross analyses are generally made first to determine the total amount of radioactivity of a certain type that is present; the more expensive specific analyses for beta- and alpha-emitting isotopes are made only if the gross measurements are above background levels.  A gross beta or gross alpha measurement simply means that all beta activity or all alpha activity in the sample is measured.  There is no distinction between which beta-emitting or alpha-emitting isotopes are present, only how much beta or alpha activity there is.  Gross measurements are thus used as a method to screen samples for relative levels of radioactivity.
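
The screening logic described above can be summarized in a short sketch (Python). The sample identifiers, gross results, and background threshold are hypothetical values chosen only to illustrate the decision; they are not taken from ESER monitoring data.

    # Hypothetical gross screening results (arbitrary activity units).
    background_threshold = 0.5
    samples = [
        {"id": "sample-01", "gross_alpha": 0.2, "gross_beta": 0.3},
        {"id": "sample-02", "gross_alpha": 0.9, "gross_beta": 0.4},
    ]

    for sample in samples:
        # Gross screening comes first; only samples above background move on
        # to the more expensive radionuclide-specific radiochemical analyses.
        if (sample["gross_alpha"] > background_threshold
                or sample["gross_beta"] > background_threshold):
            print(f"{sample['id']}: above background -> perform specific analyses")
        else:
            print(f"{sample['id']}: at or below background -> screening only")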


Uncertainty in Detecting Radioactivity 

All measurements have associated uncertainties.  For radioactivity measurements, the uncertainty arises from variations in detection equipment and analysis procedures, human error, natural background radiation, counting uncertainty, variability in how the radionuclide targeted for analysis is distributed in the medium being analyzed, and other sources.

Counting uncertainty is reported with radioactivity analyses.  This uncertainty exists because radioactive atoms disintegrate at random and only a fraction of the particles and energy released actually strikes the detector.  As a result, if the radioactive disintegrations from one sample are counted multiple times, each for the same duration, the count will vary around some average value.  Because of natural background radiation, this is true even for a sample that contains no radioactivity: if such a sample were analyzed multiple times, the background-corrected results would vary around an average of zero.  Samples with radioactivity levels very close to zero will therefore have negative results approximately 50% of the time.  In order to avoid censoring the data, these negative values, rather than “not detectable” or “zero,” are reported for radionuclides of interest.  This provides more information than truncating results near background activities to the detection limit, and it allows for improved statistical analyses and measures of trends in the data.
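
A brief simulation (Python) illustrates this behavior.  Assuming Poisson counting statistics and a hypothetical background rate and counting time, a sample with no radioactivity yields background-corrected results that scatter around zero, with roughly half of them negative.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical counting conditions for a blank (zero-activity) sample.
    background_rate = 1.5    # background counts per second (assumed)
    count_time = 600         # seconds per measurement (assumed)
    n_measurements = 10_000

    # Each measurement records Poisson-distributed background counts; the
    # long-run background mean is then subtracted to give a net result.
    gross_counts = rng.poisson(background_rate * count_time, n_measurements)
    net_counts = gross_counts - background_rate * count_time

    print(f"average net counts: {net_counts.mean():.2f}")        # near zero
    print(f"fraction negative: {(net_counts < 0).mean():.1%}")   # roughly half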


Confidence in Detections

There are two main types of errors that may be made when reporting levels of contaminants: 

  • reporting something as not present when it actually is (a false negative); and

  • reporting something as present when it actually is not (a false positive).

It is the goal of the ESER program to minimize the error of saying something is not present when it actually is.  To do this, a two standard deviation (2s) reporting level is used.  In a distribution of analysis results for one sample, the average analysis result, plus or minus (±) two standard deviations (2s) of that average, approximates the 95% confidence interval for that average.  When a sample analysis result is greater than 2s from zero, we have about 95% confidence that the value came from a distribution with an average greater than zero.  The uncertainty of measurements in this report is denoted by following each result with a ± 2s uncertainty term, and all results that are greater than 2s from zero are reported in the text.

By using a 2s value as a reporting level (i.e., reporting results that are greater than two times their uncertainty), we control the error rate for saying something is not there when it is to less than 5% (we have 95% confidence the value is greater than zero).
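
The reporting rule can be expressed as a small sketch (Python).  The nuclide names, results, and 2s uncertainty terms below are hypothetical; the only rule taken from the text is that a result is reported when it exceeds two times its uncertainty.

    # Hypothetical results, each given as (result, 2s uncertainty) in
    # arbitrary activity units.
    measurements = [
        ("Cs-137", 0.41, 0.18),
        ("Sr-90", 0.05, 0.12),
        ("Am-241", -0.02, 0.06),
    ]

    for nuclide, result, two_sigma in measurements:
        # A result greater than its 2s uncertainty lies more than two standard
        # deviations above zero (~95% confidence the true value exceeds zero).
        if result > two_sigma:
            print(f"{nuclide}: {result:+.2f} ± {two_sigma:.2f} -> reported in text")
        else:
            print(f"{nuclide}: {result:+.2f} ± {two_sigma:.2f} -> below 2s reporting level")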

However, there is a relatively high error rate for false detections (reporting something as present when it actually is not) for results near their 2s uncertainty.  This is because the variability around zero for samples with no radioactivity may substantially overlap the variability around the sample result (see figure above).  Measured uncertainties in current analysis techniques were used to calculate the level at which we are 95% certain a sample result is greater than the distribution of values for a sample with no radioactivity.  This level is known as the minimum detectable activity (MDA).  For sample results greater than the MDA, we have 95% confidence that the results are not false detections.  The MDA expressed per sample weight or volume is called the minimum detectable concentration (MDC).
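
This report does not give the formula its laboratories use to compute the MDA, but a widely used approximation is the Currie detection-limit equation.  The sketch below (Python) applies that approximation with hypothetical counting parameters; it is illustrative only and should not be read as the ESER program's actual procedure.

    import math

    def currie_mda(background_counts, efficiency, count_time_s, sample_volume_l):
        """Approximate MDA and MDC using the Currie detection limit,
        L_D = 2.71 + 4.65 * sqrt(background counts)."""
        detection_limit_counts = 2.71 + 4.65 * math.sqrt(background_counts)
        mda_bq = detection_limit_counts / (efficiency * count_time_s)   # activity (Bq)
        mdc_bq_per_l = mda_bq / sample_volume_l                         # per unit volume
        return mda_bq, mdc_bq_per_l

    # Hypothetical parameters: 400 background counts, 25% detection efficiency,
    # a 1,000-second count, and a 1-liter sample.
    mda, mdc = currie_mda(400, efficiency=0.25, count_time_s=1000, sample_volume_l=1.0)
    print(f"MDA ≈ {mda:.3f} Bq, MDC ≈ {mdc:.3f} Bq/L")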