Guided by the original Automotive Industry Action Group (AIAG) reference manual, measurement system analysis has long been a significant part of process characterization and improvement. There has been some scrutiny, however, of summary metrics such as gauge repeatability and reproducibility (GR&R) and percent tolerance, as well as of what values are considered acceptable. In industries such as medical devices, measurement system assessments may influence decisions that have a significant impact on patients' lives. As a result, the effect of measurement system analysis on decision making, particularly the likelihood of misclassification, is critical.
Assessment of Measurement Precision
Precision refers to the relative spread in measured values of a common object. A GR&R study is generally used to quantify the precision of a measurement device. A GR&R study estimates gauge variance, which can then be compared to a specific measurement range. That range could be the variability observed in the GR&R study itself, the variability of a different population, or a fixed interval. A gauge is usually considered precise enough when gauge variance is small relative to the measurement range of interest.
A GR&R study is a designed experiment that investigates factors influencing gauge variance using a full factorial (crossed) design. Most GR&R studies consider two factors: 1) a random factor representing parts, and 2) a factor that affects measurement variance (operator is commonly chosen as the second factor). Additional factors may be included, each reflecting a different potential source of variation. Based on the factorial structure of the experiment, a GR&R analysis classifies contributions to gauge variance as repeatability or reproducibility. Repeatability is the contribution of the measurement device to gauge variance when the same part is measured several times with all other factors held constant. Reproducibility is the contribution of the additional factors, and of interactions between those factors and parts, to gauge variance.
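To make the repeatability/reproducibility decomposition concrete, the following is a minimal Python sketch of the ANOVA method for a balanced, crossed parts-by-operators study with replicate measurements. The column names (part, operator, measurement) and the function name are illustrative assumptions, not part of the original article.

```python
import pandas as pd

def grr_variance_components(df, part="part", operator="operator", value="measurement"):
    """Estimate GR&R variance components from a balanced, crossed
    parts x operators study (classic two-way ANOVA method).
    Assumes at least two replicates per part/operator cell."""
    p = df[part].nunique()            # number of parts
    o = df[operator].nunique()        # number of operators
    r = len(df) // (p * o)            # replicates per cell (balanced design assumed)
    grand = df[value].mean()

    # Sums of squares for the crossed design with interaction
    ss_total = ((df[value] - grand) ** 2).sum()
    ss_part = o * r * ((df.groupby(part)[value].mean() - grand) ** 2).sum()
    ss_oper = p * r * ((df.groupby(operator)[value].mean() - grand) ** 2).sum()
    cell_means = df.groupby([part, operator])[value].mean()
    ss_cells = r * ((cell_means - grand) ** 2).sum()
    ss_inter = ss_cells - ss_part - ss_oper
    ss_error = ss_total - ss_cells

    # Mean squares
    ms_part = ss_part / (p - 1)
    ms_oper = ss_oper / (o - 1)
    ms_inter = ss_inter / ((p - 1) * (o - 1))
    ms_error = ss_error / (p * o * (r - 1))

    # Solve the expected mean squares; negative estimates are truncated to zero
    var_repeat = ms_error
    var_inter = max((ms_inter - ms_error) / r, 0.0)
    var_oper = max((ms_oper - ms_inter) / (p * r), 0.0)
    var_part = max((ms_part - ms_inter) / (o * r), 0.0)

    var_reprod = var_oper + var_inter          # reproducibility
    var_gauge = var_repeat + var_reprod        # total gauge (GR&R) variance
    return {"repeatability": var_repeat, "reproducibility": var_reprod,
            "gauge": var_gauge, "part": var_part}
```

The square root of the returned gauge variance is the gauge standard deviation used in the percentage metrics discussed below.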
GR&R Metrics Associated with Gauge Precision
A common way to describe gauge precision is the percentage of a measurement range of interest that gauge error consumes. Using the gauge standard deviation (σgauge) from an ANOVA analysis of a GR&R study, three different percentages can be readily calculated:
- Percent study variation
- Percent process
- Percent tolerance
Percent study variation (often called percent R&R) indicates how well a measurement device can distinguish parts from one another across the range of measured variation observed during the gauge study. Percent process offers insight into the measurement system's ability to distinguish parts over a historical process operating range. If the parts in the gauge study are representative of historical process variation, percent study variation will be similar to percent process. In many situations, however, the parts selected for the GR&R study span a narrower or wider range than the planned or observed process operating range, and percent process can then be larger or smaller than percent study variation. Percent tolerance measures the ability of a measurement device to distinguish parts from one another across the range of part acceptance defined by the specification limits. For a one-sided specification, this range is taken as twice the distance between the population mean and the specification limit.
In each case, gauge precision is assessed by comparing gauge variance to a measurement range of interest. When gauge error is small relative to that range, the gauge is well suited to distinguish or compare measured values across it.
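The three percentages can be computed directly from the gauge standard deviation and the relevant measurement range. The sketch below is one plausible implementation under common conventions (a 6·σ spread for percent tolerance; some references use 5.15 instead); the function and parameter names are hypothetical.

```python
def grr_percent_metrics(sigma_gauge, sigma_study, sigma_process=None,
                        lsl=None, usl=None, process_mean=None):
    """Express gauge standard deviation as a percentage of three candidate
    measurement ranges: study variation, historical process variation,
    and the tolerance defined by specification limits."""
    metrics = {"percent_study_variation": 100.0 * sigma_gauge / sigma_study}

    if sigma_process is not None:
        metrics["percent_process"] = 100.0 * sigma_gauge / sigma_process

    # Tolerance width: USL - LSL for a two-sided spec, or twice the distance
    # from the population mean to the limit for a one-sided spec.
    if lsl is not None and usl is not None:
        tol = usl - lsl
    elif process_mean is not None and (lsl is not None or usl is not None):
        limit = usl if usl is not None else lsl
        tol = 2.0 * abs(process_mean - limit)
    else:
        tol = None

    if tol is not None:
        # 6*sigma covers roughly 99.73% of a normal gauge-error distribution.
        metrics["percent_tolerance"] = 100.0 * (6.0 * sigma_gauge) / tol
    return metrics
```

For example, under these conventions a gauge standard deviation of 0.2 against a two-sided tolerance of 4 units gives a percent tolerance of 100 × (6 × 0.2) / 4 = 30 percent.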
See full story on isixsigma.com