Mechanics & Industry
Volume 18, Number 7, 2017
STANKIN: Innovative manufacturing methods, measurements and materials
Article Number: 702
Number of pages: 7
DOI: https://doi.org/10.1051/meca/2017054
Published online: 30 December 2017

© AFM, EDP Sciences 2017

1 Introduction

In the machining industry, one of the main classes of quality characteristics of a piece is its dimensional parameters, such as length, width, diameter, and thickness. The requirements for such a characteristic are usually expressed as a lower specification limit LSL and an upper specification limit USL (or, sometimes, only one of them). Dimensional inspection is carried out to make sure that the produced piece has the required dimension. The methods of dimensional inspection can be divided into inspection by variables, when the dimension is measured and then compared with the specification limits, and inspection by attributes, when the dimension is checked to be within the specification limits by using a limit gauge or a pair of gauges (go/no-go).

The development of the quality concept over the past few decades has led to the understanding that the main efforts must be focused on the manufacturing process rather than on the inspection of the finished product [1,2]. A manufacturer has to pay close attention to the statistical characteristics of processes; the conformance of a product to a specified tolerance is no longer enough. Statistical process control [3–5] has become one of the most important tools of quality assurance. It is even better to speak of the continuous improvement of processes in manufacturing, engineering, service, or any business management, which is one of the main ideas of modern quality management [6].

Process characteristics are especially important for a product assembled from many parts [7], and, as shown in [8], quality requirements can be accurately predicted at the design stage thanks to a better knowledge of the statistical characteristics of the process.

According to this concept, in modern contract practice, suppliers of parts often have to give statistical proof of quality in addition to protocols of acceptance inspection, whether complete or sampling. As such proof, the capability indices Cp and/or Cpk [9], calculated from the actual process data, are widely used [10,11].

The capability index Cp includes the variation parameter, or width of random spread, which for a normal distribution is usually taken as 6σ:

$$C_p = \frac{USL - LSL}{6\sigma}, \tag{1}$$

where USL and LSL are the upper and lower specification limits, respectively. The commonly recommended requirement is Cp ≥ 1.33; in most cases, at least Cp ≥ 1 must be ensured.

Another capability index, Cpk, also measures how close the process is to the target:

$$C_{pk} = \frac{\min(USL - \mu,\; \mu - LSL)}{3\sigma}, \tag{2}$$

so it is a better characteristic of the actual process quality. Statistical process control must ensure that the values of Cpk and Cp stay close to each other.
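As a quick numerical illustration of equations (1) and (2), the following minimal sketch computes both indices for an off-center process; all limits and parameter values are illustrative assumptions, not data from the article.

```python
# A minimal sketch of equations (1) and (2); all numbers below are
# illustrative assumptions, not data from the article.

def cp(usl: float, lsl: float, sigma: float) -> float:
    """Cp = (USL - LSL) / (6 sigma), equation (1)."""
    return (usl - lsl) / (6.0 * sigma)

def cpk(usl: float, lsl: float, mu: float, sigma: float) -> float:
    """Cpk = min(USL - mu, mu - LSL) / (3 sigma), equation (2)."""
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

# A process whose mean sits 0.01 mm above the target of 10.00 mm:
usl, lsl = 10.05, 9.95            # specification limits, mm
mu, sigma = 10.01, 0.005          # process mean and standard deviation, mm
print(cp(usl, lsl, sigma))        # 3.33: the spread alone looks capable
print(cpk(usl, lsl, mu, sigma))   # 2.67: the off-center mean lowers Cpk
```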

Thus, to estimate the capability indices, we must estimate the process mean and standard deviation. This can be done by taking a sample of pieces produced in the process and measuring the values of their quality characteristic (the process output). Different estimation procedures can be applied [12].

In general, statistical process control includes the following stages:

  • periodically taking samples of pieces being produced in the process;

  • statistical testing of the process with the aim of detecting shifts of the parameters (mean μ and standard deviation σ);

  • if no shifts are detected, the process continues without changes; if a shift of the mean is detected, the process can be improved by an appropriate corrective action; if too large a variability is detected, then, as a rule, the process must be stopped until its causes are found and eliminated.

Note that the idea of an observation-based correction is used not only in process control but also in measurements and measuring instruments [13]. A technical background for a similar approach is currently being developed for the newest types of manufacturing processes, such as laser cladding [14].

Different types of control charts are used to monitor the process and detect its shifts from the normal state, from the fundamental and well-known Shewhart charts recommended by the international standards [15,16] to complicated ones, such as those described, for example, in [17–19]. Control charts for variables require much smaller sample sizes and are more accurate than control charts for attributes. However, the quality characteristic must be measured to build and use control charts for variables. The measured values also allow calculating the necessary correction if a shift of the mean is detected.

However, it is more economical to gauge pieces into groups than to measure the values of their quality characteristic precisely, and limit gauges are widely used in acceptance inspection, especially when the quality characteristic is a dimensional parameter: it is often quicker, easier, and therefore cheaper to gauge an article than to measure it exactly. Several schemes of process control based on gauging have been developed, starting with [20,21]. For many processes realized on high-performance equipment, it is practically the only feasible way of monitoring.

In two-limit gauging (go/no-go), the numbers of ‘too small’ and ‘too large’ pieces are counted, so this is an inspection by attributes. The difficulty is that, for high-quality processes, the nominal level of nonconformities is only a few ppm, so a sample will contain at least one or two nonconforming pieces only if it is very large (some hundreds or thousands of pieces) or if the process is out of the normal state (Cpk < 1 or even Cp < 1) and many nonconforming pieces have already been produced.

Some methods based on the use of narrow-limit [21–24] or step [25,26] gauges have been proposed to overcome this difficulty. These methods also use the results of inspection by attributes, but with narrowed control limits UCL, LCL instead of the originally specified limits: UCL < USL, LCL > LSL. The probability of the process output falling outside these control limits is rather high even when the process is in the normal state. The methods thus allow detecting changes in the mean or the variability before a significant number of nonconforming pieces is produced, as the sketch below illustrates.
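A quick numerical check of this argument, assuming a centered normal process output; the capability values are illustrative, and the gauge distance UCL − LCL = 4σ0 is taken from the caption of Figure 2.

```python
# A quick check of the ppm argument above, assuming a centered normal
# process output; the capability values are illustrative.
from scipy.stats import norm

# For a centered process, the fraction outside the specification limits
# is 2*Phi(-3*Cp):
for cp0 in (1.0, 1.33, 1.67):
    ppm = 2 * norm.cdf(-3 * cp0) * 1e6
    print(f"Cp = {cp0}: {ppm:9.2f} ppm nonconforming")
# Cp = 1.0: ~2700 ppm; Cp = 1.33: ~66 ppm; Cp = 1.67: ~0.6 ppm

# With narrowed limits at UCL - LCL = 4*sigma0 (the distance used in
# Fig. 2), a process in the normal state already puts a visible share
# of pieces outside the control limits:
print(2 * norm.cdf(-2.0))   # ~0.0455, i.e. roughly 1 piece in 22
```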

Figure 1 reproduces, apart from the notation, the figure from [23] explaining the idea of the method. LSL and USL are the lower and upper originally specified limits, respectively; LCL and UCL are the narrowed control limits. The target of the process is μ0 = (LSL + USL)/2. The process output is assumed to be normally distributed, and its probability density includes two parameters: the mean μ and the standard deviation σ.

When a sample of size n is inspected with a pair of limit gauges (go/no-go), two numbers are counted: n1, the number of pieces whose process output value is less than the lower control limit, and n3, the number of pieces whose process output value is greater than the upper control limit. The number of pieces within the narrowed limits is n2; certainly, n = n1 + n2 + n3. An increase of either of the numbers n1, n3, or of both at once, is a signal that there are changes in the process, and it may require some corrections.

The methods based on narrow-limit or step gauges are a good alternative to standard control charts for variables when the quality characteristic is difficult or expensive to measure precisely but economical to gauge. But is it possible to use the same data to estimate the process parameters and capability indices as well?

In this article, the problem of estimating capability indices from the results of sample inspection with a pair of limit gauges is solved. The solution is based on some particular methods of statistical estimation developed and studied in [27–29].

Those estimates are adequate in the situation where the observations are grouped into a few scale intervals, or discretization steps, of the measuring device. The author named such observations strongly discretized (or ‘highly quantized’), and the obtained estimates Pitman-type estimates.

The essence of the approach is as follows. Let the vector θ represent some measured values, and let it belong to the set Θ. The random variables X1, … , Xn are observed in order to measure it. Let the observations be independent and identically distributed, with a probability density that depends on θ as a parameter: f(x, θ). The values of the observed variables are registered with a certain discretization step, because of which the sample includes only a few neighboring values sj, s1 < s2 < … < sm. The value sj repeats nj times, j = 1, … , m, with n1 + n2 + … + nm = n. Here sj = sj−1 + c, where c is the discretization step (scale interval). Observations are called strongly discretized if m is small (2–4). Such discretization is too rough to reflect the details of the distribution, and classical statistics becomes incorrect; in particular, consistency is lost.

Formally, this is equivalent to grouped samples. Usually, grouped observations are treated as if they were non-grouped, giving all the units that fall into a particular group a ‘representative’ value equal either to an endpoint or to the central value of that group's interval. This often leads to considerable systematic errors. A maximum likelihood approach to parameter estimation in such cases was studied in [30], and in [25,26] similar ideas were applied to control charts based on step gauges.

The value sj is such a representative value, obtained when the observed variable's value belongs to the interval or semi-interval of length c which includes sj. Let us denote this semi-interval by Q(sj). Depending on the method of discretization (quantization) of the measuring device used for the observation, it may be (for a rounding device)

$$Q(s_j) = \left[\, s_j - \tfrac{c}{2},\; s_j + \tfrac{c}{2} \right) \tag{3}$$

or (for a truncating device)

$$Q(s_j) = \left[\, s_j,\; s_j + c \right). \tag{4}$$

Using the notation above, the probability of obtaining the value sj when observing the variable Xi is

$$P_j(\theta) = P\{X_i \in Q(s_j)\} = \int_{Q(s_j)} f(x, \theta)\, dx. \tag{5}$$

The joint probability of the vector of the discretized observations $\mathbf{s} = (s_{j_1}, \ldots , s_{j_n})$ is

$$P(\mathbf{s} \mid \theta) = \prod_{j=1}^{m} \left[ P_j(\theta) \right]^{n_j}. \tag{6}$$

If a weighted loss function $W(\tilde{\theta}, \theta)$ is defined, the problem of optimal (in the Bayesian sense) estimation of the measured vector θ from strongly discretized observations can be stated:

$$\hat{\theta} = \arg\min_{\tilde{\theta}} \int_{\Theta} E_{\theta}\, W(\tilde{\theta}, \theta)\, d\theta, \tag{7}$$

where $\tilde{\theta}$ is the estimate to be found and $E_{\theta}$ is the expectation operator taken with the definite value of the parameter θ.

If the loss function is chosen as the squared Euclidean norm,

$$W(\tilde{\theta}, \theta) = \| \tilde{\theta} - \theta \|^2, \tag{8}$$

the solution of problem (7) (the Pitman-type estimate) is

$$\hat{\theta} = \frac{\int_{\Theta} \theta\, P(\mathbf{s} \mid \theta)\, d\theta}{\int_{\Theta} P(\mathbf{s} \mid \theta)\, d\theta}. \tag{9}$$

This estimate is consistent [29].

If the parameter vector consists of the two components µ and σ, a suitable loss function for its estimation should be chosen as [28]

$$W = w_1 (\tilde{\mu} - \mu)^2 + w_2 (\tilde{\sigma} - \sigma)^2, \tag{10}$$

and, independently of the values of the coefficients w1 and w2, the Pitman-type estimates take the form

$$\hat{\mu} = \frac{\iint_{\Theta} \mu\, P(\mathbf{s} \mid \mu, \sigma)\, d\mu\, d\sigma}{\iint_{\Theta} P(\mathbf{s} \mid \mu, \sigma)\, d\mu\, d\sigma}, \tag{11}$$

$$\hat{\sigma} = \frac{\iint_{\Theta} \sigma\, P(\mathbf{s} \mid \mu, \sigma)\, d\mu\, d\sigma}{\iint_{\Theta} P(\mathbf{s} \mid \mu, \sigma)\, d\mu\, d\sigma}. \tag{12}$$

If the standard deviation is unknown, these estimates are better than maximum likelihood estimates [28,29].
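To make the construction concrete, here is a minimal numerical sketch of estimates (11) and (12) for normally distributed observations quantized with step c. The grid bounds, the choice of the rounding-type interval Q(sj) from (3), and the sample counts are illustrative assumptions, not values from the article.

```python
# A minimal numerical sketch of the Pitman-type estimates (11), (12) for
# strongly discretized normal observations; grid bounds and the sample
# below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def pitman_mu_sigma(values, counts, c, mu_grid, sigma_grid):
    """Estimate (mu, sigma) from counts n_j of discretized readings s_j.

    Each s_j is taken to represent Q(s_j) = [s_j - c/2, s_j + c/2), the
    rounding-device case (3). The likelihood (6) is prod_j P_j**n_j, and
    (11), (12) become likelihood-weighted means over a (mu, sigma) grid.
    """
    M, S = np.meshgrid(mu_grid, sigma_grid, indexing="ij")
    loglik = np.zeros_like(M)
    for s_j, n_j in zip(values, counts):
        p_j = norm.cdf(s_j + c / 2, M, S) - norm.cdf(s_j - c / 2, M, S)
        loglik += n_j * np.log(np.clip(p_j, 1e-300, None))
    w = np.exp(loglik - loglik.max())   # unnormalized integrand weights
    return (M * w).sum() / w.sum(), (S * w).sum() / w.sum()

# Example: n = 20 readings spread over only m = 3 neighboring scale values
mu_hat, sigma_hat = pitman_mu_sigma(
    values=[-0.001, 0.000, 0.001],      # s_1 < s_2 < s_3, step c apart
    counts=[4, 12, 4],                  # n_1 + n_2 + n_3 = 20
    c=0.001,
    mu_grid=np.linspace(-0.002, 0.002, 401),
    sigma_grid=np.linspace(1e-4, 0.002, 401),
)
print(mu_hat, sigma_hat)
```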

Fig. 1. The principle of monitoring and control by attributes with narrowed limits. The explanations are in the text.

2 Methods

Using a pair of limit gauges with control limits LCL, UCL classifies the sample of n pieces into three groups: n1 pieces having X < LCL (‘too small’), n2 pieces having LCL ≤ X < UCL (‘good’), and n3 pieces with X ≥ UCL (‘too large’). The situation is very similar to strongly discretized observations grouped into three steps with a scale interval equal to UCL − LCL. So we can estimate the mean µ and standard deviation σ as if we had strongly discretized observations.

Assuming the process output is normally distributed, as is often done in serial and mass production, we can write the following expressions for the probabilities of the three groups mentioned above:

  • $P_1 = \Phi\big( (LCL - \mu)/\sigma \big)$ is the probability of the event “process output is less than the lower control limit”;

  • $P_2 = \Phi\big( (UCL - \mu)/\sigma \big) - \Phi\big( (LCL - \mu)/\sigma \big)$ is the probability of the event “process output is within the control limits”;

  • $P_3 = 1 - \Phi\big( (UCL - \mu)/\sigma \big)$ is the probability of the event “process output is greater than the upper control limit”;

where Φ(·) denotes the standard normal distribution function, all with the given values of the parameters μ, σ.

The joint probability of the definite values of the numbers n1, n2, n3 is governed by the multinomial distribution:

$$P(n_1, n_2, n_3 \mid \mu, \sigma) = \frac{n!}{n_1!\, n_2!\, n_3!}\; P_1^{n_1} P_2^{n_2} P_3^{n_3}. \tag{13}$$

Calculating the Pitman-type estimates also requires a set over which to integrate. We wish to detect rather small deviations from the normal process state, so we can choose the interval (LCL; UCL) for μ and (σ0; 2σ0) for σ, where σ0 is the nominal process standard deviation, and integrate over the set Θ = (LCL; UCL) × (σ0; 2σ0).

Finally, the Pitman-type estimates are as follows:

$$\hat{\mu} = \frac{\iint_{\Theta} \mu\, P(n_1, n_2, n_3 \mid \mu, \sigma)\, d\mu\, d\sigma}{\iint_{\Theta} P(n_1, n_2, n_3 \mid \mu, \sigma)\, d\mu\, d\sigma}, \tag{14}$$

$$\hat{\sigma} = \frac{\iint_{\Theta} \sigma\, P(n_1, n_2, n_3 \mid \mu, \sigma)\, d\mu\, d\sigma}{\iint_{\Theta} P(n_1, n_2, n_3 \mid \mu, \sigma)\, d\mu\, d\sigma}. \tag{15}$$
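The double integrals in (14) and (15) are easy to approximate numerically. The following sketch does this on a rectangular grid over Θ; the grid resolution and the example gauging result are illustrative assumptions.

```python
# A sketch of estimates (14), (15): the multinomial likelihood (13) is
# integrated numerically over Theta = (LCL, UCL) x (sigma0, 2*sigma0).
# Grid size and the example numbers are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def pitman_gauge_estimates(n1, n2, n3, lcl, ucl, sigma0, grid=201):
    mu = np.linspace(lcl, ucl, grid)
    sigma = np.linspace(sigma0, 2 * sigma0, grid)
    M, S = np.meshgrid(mu, sigma, indexing="ij")
    p1 = norm.cdf(lcl, M, S)               # P{X < LCL}
    p3 = norm.sf(ucl, M, S)                # P{X >= UCL}
    p2 = 1.0 - p1 - p3                     # P{LCL <= X < UCL}
    # The multinomial coefficient n!/(n1! n2! n3!) in (13) cancels in the
    # ratio of integrals, so it can be omitted.
    lik = p1**n1 * p2**n2 * p3**n3
    return (M * lik).sum() / lik.sum(), (S * lik).sum() / lik.sum()

# Example: 50 gauged pieces, 3 below LCL and 2 above UCL, with the gauge
# distance UCL - LCL = 4*sigma0 used in Figure 2:
mu_hat, sigma_hat = pitman_gauge_estimates(
    n1=3, n2=45, n3=2, lcl=-2.0, ucl=2.0, sigma0=1.0)
print(mu_hat, sigma_hat)
```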

These estimates are functions of the results of the inspection, i.e., of the triple of numbers n1, n2, n3. It is easy to find the probability distribution of the estimates under given values of μ, σ from expression (13). If H(t1, t2) is a scalar function of two arguments, the distribution function of the random variable $H(\hat{\mu}, \hat{\sigma})$ is

$$F_H(y) = P\{ H(\hat{\mu}, \hat{\sigma}) < y \} = \sum P(n_1, n_2, n_3 \mid \mu, \sigma), \tag{16}$$

where the summation runs over those triples n1, n2, n3 for which the values of the estimates are such that $H(\hat{\mu}, \hat{\sigma}) < y$.
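Equation (16) can be evaluated exactly by enumeration, since for a sample of size n there are only (n + 1)(n + 2)/2 possible triples. A minimal sketch, reusing the hypothetical pitman_gauge_estimates function from the snippet above:

```python
# A sketch of equation (16): enumerate all triples (n1, n2, n3) with
# n1 + n2 + n3 = n and sum the multinomial probabilities of those whose
# estimates satisfy H(mu_hat, sigma_hat) < y. Reuses the hypothetical
# pitman_gauge_estimates sketch above; in practice the estimates should
# be precomputed once per (n, LCL, UCL, sigma0).
from math import comb
from scipy.stats import norm

def cdf_of_H(H, y, n, mu, sigma, lcl, ucl, sigma0):
    p1 = norm.cdf(lcl, mu, sigma)
    p3 = norm.sf(ucl, mu, sigma)
    p2 = 1.0 - p1 - p3
    total = 0.0
    for n1 in range(n + 1):
        for n3 in range(n - n1 + 1):
            n2 = n - n1 - n3
            mu_hat, sigma_hat = pitman_gauge_estimates(
                n1, n2, n3, lcl, ucl, sigma0)
            if H(mu_hat, sigma_hat) < y:
                # multinomial probability of this particular triple
                total += (comb(n, n1) * comb(n - n1, n3)
                          * p1**n1 * p2**n2 * p3**n3)
    return total
```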

3 Results and discussion

The estimation method described above was developed to increase the precision of measurements in extremely high-precision cases, because it decreases the error caused by measurement discretization. An example of applying the method to coordinate measurements of intricately shaped surfaces is given in [31].

In [32], the method is applied to data processing in a laser interferometric system. Table 1, taken from [32], presents some consecutive values observed from the laser interferometer while the reflector is stopped. There is a random spread of the observations, so the center of the distribution should be estimated to locate the “zero” position more precisely. After moving to another position, we can perform the same operations, get the new position of the reflector, and subtract the “zero” from it to measure the distance with sub-micrometer precision.

However, the reading discretization step of the values in Table 1 is rather large in comparison with the width of the random spread, so the classical estimates (like the sample average) cannot be applied. On the other hand, the step is very small (only a fraction of a nanometer), and it is very difficult to reduce it any further. In [32], the Pitman-type estimates for these data are calculated: $\hat{\mu}$ = −0.45 · 10−3 µm, $\hat{\sigma}$ = 0.35 · 10−3 µm. With these estimates, the “zero” may be precisely located, and later the measured position, too.

In almost the same manner, the proposed estimation method allows estimating the mean and standard deviation of a manufacturing process in industry.

After inspecting a sample of pieces and counting the quantities n1 of pieces having X < LCL, n2 of pieces having LCL ≤ X < UCL, and n3 of pieces having X ≥ UCL, the estimates of the parameters μ, σ should be calculated according to (14) and (15). Then we can substitute the estimates into the appropriate formulas and find the estimates of the capability indices:

$$\hat{C}_p = \frac{USL - LSL}{6 \hat{\sigma}}, \tag{17}$$

$$\hat{C}_{pk} = \frac{\min(USL - \hat{\mu},\; \hat{\mu} - LSL)}{3 \hat{\sigma}}. \tag{18}$$

Of course, in practice there is no need to evaluate the integrals. Instead, the values of the estimates should be calculated beforehand and tabulated for all the combinations of n1, n2, n3 that form the given sample size n = n1 + n2 + n3.
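A sketch of such a table, again built on the hypothetical pitman_gauge_estimates function from Section 2; all numeric inputs, including the specification limits in the usage lines, are illustrative.

```python
# Tabulating the estimates once for every possible gauging result, as
# suggested above; pitman_gauge_estimates is the hypothetical sketch from
# Section 2, and all numeric inputs are illustrative.
def tabulate_estimates(n, lcl, ucl, sigma0):
    table = {}
    for n1 in range(n + 1):
        for n3 in range(n - n1 + 1):
            n2 = n - n1 - n3
            table[(n1, n2, n3)] = pitman_gauge_estimates(
                n1, n2, n3, lcl, ucl, sigma0)
    return table

table = tabulate_estimates(n=50, lcl=-2.0, ucl=2.0, sigma0=1.0)
mu_hat, sigma_hat = table[(3, 45, 2)]   # instant lookup after gauging

# Capability index estimates (17), (18) with illustrative spec limits:
usl, lsl = 3.0, -3.0
cp_hat = (usl - lsl) / (6 * sigma_hat)
cpk_hat = min(usl - mu_hat, mu_hat - lsl) / (3 * sigma_hat)
```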

Operating characteristics of hypothesis tests for the index values based on the found estimates are built according to (16). They depend on both parameters μ, σ. For example, if the nominal value of the capability index is $C_p^0$, the hypothesis $H_0: C_p \ge C_p^0$ against the alternative $H_1: C_p < C_p^0$ may be tested with the use of the ratio

$$\hat{C}_p / C_p^0; \tag{19}$$

the null hypothesis is rejected when

$$\hat{C}_p / C_p^0 < 1 - \lambda \tag{20}$$

with a certain 0 < λ < 1. The corresponding operating characteristic follows from (16) with $H = \hat{C}_p / C_p^0$ and y = 1 − λ. To obtain the operating characteristic in the common form, as a function of the ratio $C_p / C_p^0$, we should take into account that the dependence of the probability distribution (13) on σ is equivalent to the dependence on $C_p / C_p^0$, together with the dependence on μ. So there is a family of such functions for different values of μ. Some examples are presented in Figure 2.

For example, suppose that we have a sample of 50 pieces from a process whose mean value equals the nominal: μ = μ0. We complete the gauging and count the numbers n1, n2, n3, calculate the estimate $\hat{C}_p$ of the index, and accept the lot if $\hat{C}_p / C_p^0 \ge 1 - \lambda$. Figure 2b shows that with such a procedure we have a probability of about 0.95 of accepting the lot when the actual ratio $C_p / C_p^0$ is equal to 1, and a probability of about 0.1 of accepting the lot from a process with a noticeably smaller actual ratio (the exact value can be read from Fig. 2b). As the mean shifts by 0.4σ0, the first probability stays almost the same, and the second becomes about 0.19.

Note that if one of the pieces has a quality characteristic value falling far from the majority of the sample, it will have only a limited influence on the values of the estimates. So the proposed estimates are robust, which is important for statistical process control [33].

Here the estimates of the capability indices are inferred from the estimates of the process parameters, which are optimal in the sense of the loss function (10). It would be interesting to use, instead, a loss function directly referring to the precision of estimation of the appropriate index, i.e., a function proportional to

$$(\tilde{C}_p - C_p)^2 \tag{21}$$

or

$$(\tilde{C}_{pk} - C_{pk})^2. \tag{22}$$

Table 1

Observed values in the laser interferometric system.

Fig. 2. Some examples of operating characteristics of tests with the rejection rule $\hat{C}_p / C_p^0 < 1 - \lambda$, given as functions of the ratio $C_p / C_p^0$. The gauge limit distance is UCL − LCL = 4σ0.

4 Conclusions

The described method of estimation of the capability indices has the following features:

  • it uses the idea of sampling inspection by attributes with control limits narrowed in comparison with the initial specifications, which allows reducing expenses and gives a higher performance than inspection and control by variables;

  • at the same time, it allows obtaining numerical estimates of the capability indices, which is usually impossible if only attribute data are available;

  • the estimates can be used in the machining industry in the inspection and acceptance of lots of pieces, as well as in the monitoring and control of the manufacturing process.

The estimate of the mean μ gives the value of the correction needed to bring the process closer to its target, if necessary. This estimate is robust, i.e., weakly sensitive to one or two outliers in a sample.

Using the estimates in process control allows improving the quality of articles in the machining industry within their tolerances without increasing the expenses for precise measurements, which is very important for the modern high-performance industry.

Acknowledgements

This work was financially supported by the Ministry of Education and Science of the Russian Federation within the framework of state task No. 9.7889.2017/8.9.

The work was carried out using the equipment of the Center of Collective Use of MSTU “STANKIN”.

References

  1. J.-M. Judic, Process tolerancing: a new approach to better integrate the truth of the processes in tolerance analysis and synthesis, Procedia CIRP 43 (2016) 244–249
  2. A. Azizi, Evaluation improvement of production productivity performance using statistical process control, overall equipment efficiency, and autonomous maintenance, Procedia Manuf. 2 (2015) 186–190
  3. W.A. Syed, Statistical process control for total quality, Johns Hopkins APL Tech. Dig. 13 (1992) 317–325
  4. B. Mason, A.J. Jiju, Statistical process control: an essential ingredient for improving service and manufacturing quality, Manag. Serv. Qual. 10 (2007) 233–238
  5. P. Gejdoš, Continuous quality improvement by statistical process control, Procedia Econ. Financ. 34 (2015) 565–572
  6. D. Kiran, Total quality management: key concepts and case studies, Butterworth-Heinemann, Oxford, 2016
  7. M. Eben-Chaime, Mutual effects of defective components in assemblies, J. Manuf. Syst. 36 (2015) 1–6
  8. N. Gayton, P. Beaucaire, J.-M. Bourinet, E. Duc, M. Lemaire, L. Gauvrit, APTA: advanced probability-based tolerance analysis of products, Mécanique & Industries 12 (2011) 71–85
  9. K. Palmer, K.-L. Tsui, A review and interpretations of process capability indices, Ann. Oper. Res. 87 (1999) 31–47
  10. ISO 11462-2:2010, Guidelines for implementation of statistical process control (SPC) − Part 2: Catalogue of tools and techniques
  11. ISO 22514-2:2013, Statistical methods in process management − Capability and performance − Part 2: Process capability and performance of time-dependent process models
  12. E. Álvarez, P.J. Moya-Férnandez, F.J. Blanco-Encomienda, J.F. Muñoz, Methodological insights for industrial quality control management: the impact of various estimators of the standard deviation on the process capability index, J. King Saud Univ. Sci. 27 (2015) 271–277
  13. S. Konov, B. Markov, Algorithm of correction of error caused by perspective distortions of measuring mark images, Mechanics & Industry 17 (2016) 713
  14. I. Smurov, M. Doubenskaia, S. Grigoriev, A. Nazarov, Optical monitoring in laser cladding of Ti6Al4V, J. Therm. Spray Technol. 21 (2012) 1357–1362
  15. ISO 7870-1:2014, Control charts − Part 1: General guidelines
  16. ISO 7870-2:2013, Control charts − Part 2: Shewhart control charts
  17. G. Capizzi, G. Masarotto, An adaptive exponentially weighted moving average control chart, Technometrics 45 (2003) 199–207
  18. J. Park, C.-H. Jun, A new multivariate EWMA control chart via multiple testing, J. Process Control 26 (2015) 51–55
  19. A. Korzenowski, G. Vidor, G. Vaccaro, C. Ten Caten, Control charts for flexible and multi-variety production systems, Comput. Ind. Eng. 88 (2015) 284–292
  20. L.H. Tippett, The efficient use of gauges in quality control, Engineer 177 (1944) 481–483
  21. A.E. Mace, The use of limit gages in process control, Ind. Qual. Control 8 (1952) 24–31
  22. E.R. Ott, A.B. Mundel, Narrow-limit gaging, Ind. Qual. Control (1954) 21–28
  23. M.I. Rozno, Process control based on narrow-limits data for attributes, Metody Menedzhmenta Kachestva 12 (2001) 27–33
  24. F. Aparisi, E.K. Epprecht, J. Mosquera, Statistical process control based on optimum gages, Qual. Reliab. Eng. Int. (2017), DOI: 10.1002/qre.2135
  25. S. Steiner, P. Geyer, G.O. Wesolowsky, Control charts based on grouped observations, Int. J. Prod. Res. 32 (1994) 75–91
  26. S. Steiner, P. Geyer, G.O. Wesolowsky, Shewhart control charts to detect mean and standard deviation shifts based on grouped data, Qual. Reliab. Eng. Int. 12 (1996) 345–353
  27. D.A. Masterenko, Choice of the best estimate for the measured value from strongly discretized observations, Meas. Tech. 57 (2011) 764–768
  28. D.A. Masterenko, Statistical estimation of measured quantities from strongly discretized observations with unknown scale parameter of the random component, Meas. Tech. 55 (2012) 654–658
  29. D.A. Masterenko, Increasing of precision of informational-measuring systems in automated manufacturing based on the methods of statistical processing of strongly discretized observations, Dr. Tech. Sc. Thesis, MSUT “STANKIN”, Moscow, 2015
  30. G. Kulldorff, Contributions to the theory of estimation from grouped and partially grouped samples, John Wiley & Sons, New York, 1961
  31. D.A. Masterenko, Advantages gained with the use of methods of statistical processing of discretized observations in coordinate measurements of intricately shaped surfaces, Meas. Tech. 58 (2015) 766–771
  32. D.A. Masterenko, V.I. Teleshevskii, Features of numerical processing of measurement information for high-precision linear and angular measurements, Meas. Tech. 59 (2017) 1254–1259
  33. L. Ren-fen, H. Deng-Yuan, On some data oriented robust estimation procedures for means, J. Appl. Stat. 30 (2003) 625–634

Cite this article as: D.A. Masterenko, A.S. Metel, Estimation of process capability indices from the results of limit gauge inspection of dimensional parameters in machining industry, Mechanics & Industry 18, 702 (2017)

