Tolerance Stacks

Why you need Tolcap for a capable stack tolerance

In this video we take a look at why statistical tolerancing is invoked and examine the assumptions underlying statistical tolerance analysis. We show how Tolcap calculates capable tolerances for the parts in a "stack" and introduce a simple equation that you can use to correctly stack your assembly's component tolerances.

If you would like to comment please join in the conversation on YouTube.

Transcript

Tolerance Stacks

This presentation is about tolerance stacks and particularly statistical tolerancing. An earlier presentation explained why you need Tolcap for checking and setting the tolerances you put on drawings.
If Tolcap is good for individual tolerances, then hopefully you will see that Tolcap is, well, essential for tolerances in combination – in tolerance stacks.

Tolcap for Tolerance Stacks

Statistical tolerancing is almost always misapplied! We'll have a look shortly at why statistical tolerancing is invoked, and how it is usually applied with no consideration of any underlying assumptions. In fact there is a general lack of awareness that there are any underlying assumptions.

Tolerance stacks are not easy – the theory is essentially maths, and I apologise in advance that the explanation will involve some maths, but it really isn't very advanced, and I will take it slowly.

Maths is necessary to be able to explain:

  • the assumptions of statistical tolerancing;
  • why Tolcap is vital for valid statistical tolerancing;
  • and to show a straightforward method of calculating a sound statistical stack tolerance using Tolcap.

The Tolerance Stack Problem

First let's look at what we mean by a tolerance stack.

We set tolerances for components, but it is often necessary to ensure the parts fit properly when they are assembled together:
The diagram represents an assembly -
three ‘blocks’ dimensioned d1+/-t1, d2+/-t2 and d3+/-t3
are fitted side by side into a ‘cradle’ dimensioned d4+/-t4.
Will they fit?

OK this example looks unrealistically simple, and your real designs will be more complex, but often they do essentially reduce to this problem statement.

Clearly we need d4 to be greater than d1 + d2 + d3 to allow for the blocks being larger than nominal size, but if the difference is too large, the assembly may be too ‘slack’ when the blocks are smaller than nominal.


So what should we specify for d4?

Tolerance Stack Calculation

Always start with a worst case analysis:
The assembly will always fit if the parts are in tolerance and:
if (d4 – t4) is greater than (d1 + t1 + d2 + t2 + d3 + t3),
or re-arranging that: d4 must be greater than (d1 + d2 + d3 + t1 + t2 + t3 + t4) –
note that we add t4.

More often than not, when the dimensions are at opposite ends of their tolerance bands, the assembly would be unacceptably slack -
when all the dimensions come out on nominal, the gap is the sum of the tolerances,
and when d4 is at maximum tolerance and the blocks on minimum, the gap is twice the sum of the tolerances.

At this point, someone will usually say – “Let's use statistical tolerancing!”:
We don't have to add up the individual tolerances: statistical tolerancing lets you ‘root-sum-square’ them (that is square them all, add the squares up and take the square root).
So now d4 = d1 + d2 + d3 + √(t1² + t2² + t3² + t4²)
where √ is ‘the square root of’.
This gives a smaller answer for d4, so less ‘slack’ - but is it justified?
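To make that concrete, here is a minimal Python sketch of the two calculations (it is not part of the presentation, and the nominal dimensions and tolerances are made up purely for illustration):

    import math

    # Hypothetical numbers, just for illustration: three block dimensions and four tolerances.
    nominals = [10.0, 20.0, 15.0]        # d1, d2, d3
    tols = [0.1, 0.1, 0.1, 0.1]          # t1, t2, t3, t4

    # Worst case: d4 must be at least d1 + d2 + d3 + (t1 + t2 + t3 + t4)
    worst_case_d4 = sum(nominals) + sum(tols)

    # Statistical version: d4 = d1 + d2 + d3 + the root-sum-square of the tolerances
    rss_d4 = sum(nominals) + math.sqrt(sum(t**2 for t in tols))

    print(worst_case_d4)   # 45.4
    print(rss_d4)          # 45.2 - the smaller, less 'slack' answer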

Statistical Tolerancing Justification

We know that ‘root-sum-squaring’ numbers gives a smaller answer than just adding them up. Pythagoras knew that – we can see it is a shorter distance along the hypotenuse of a right angled triangle than going round the other two sides, but does it mean anything in this situation?

The basis of statistical tolerancing is - as I said - a mathematical theorem:
Suppose we were to make up a variable - let's call it ‘stack’:
- constructed by adding or subtracting a number of component variables that are all independent (that means the value of one variable is not in any way related to any of the others);
- and further suppose that the probability of finding a particular value obeys a ‘normal’ or ‘Gaussian’ distribution;
- and variable i has a mean X̄i and standard deviation σi.

Now it is a property of the normal distribution that if you do add or subtract variables that are independent & normally distributed (as we want to do to make up this variable ‘stack’), then the result follows a normal distribution.
Further, this variable ‘stack’ will have:

  • a mean which is the sum of the component means
  • and a standard deviation which is the root sum square of the component standard deviations.

The means have to be added algebraically, that is added or subtracted as appropriate – but the sigma squareds of course all add.
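A minimal Monte Carlo sketch in Python (with made-up means and sigmas, not figures from the presentation) illustrates the property:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Hypothetical component dimensions: independent and normally distributed.
    means  = [10.0, 20.0, 15.0]    # nominal dimensions d1, d2, d3
    sigmas = [0.05, 0.08, 0.06]    # their standard deviations

    # Add the independent, normally distributed variables to form the 'stack'.
    stack = sum(rng.normal(m, s, n) for m, s in zip(means, sigmas))

    print(stack.mean())                          # ~45.0, the sum of the means
    print(stack.std())                           # ~0.112, close to...
    print(np.sqrt(sum(s**2 for s in sigmas)))    # ...the root-sum-square of the sigmas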

Statistical Tolerancing Justification

So that is the theory. If we could travel forward in time:

  • we could measure the dimensions of the components in the stack;
  • assure ourselves they are indeed independent;
  • check that they are normally distributed;
  • and measure Xbar and σ for each one.

Then we could properly calculate the stack tolerance
- and if our tolerances happen to be the same multiple of the sigmas,
- and if the means all come out at the nominal dimension,
- then ‘root-sum-squaring’ the tolerances will work!

But how do we know that?
And if we don't know – and I ask again how we could know - there is no justification for ‘root-sum-squaring’!
But we have Tolcap!...

 

** If your head hurts, this might be a good point to pause this presentation and walk round the office!


 

Statistical Tolerancing with Tolcap

OK, if you're back now, let's continue to use the same example to explore and try to understand the traditional approach to statistical tolerancing and then we'll take a look at a method for calculating stack tolerances using Tolcap.

We will compare and explore how stack tolerances are calculated as a function of the component tolerances:
tstack is a function of the individual tolerances.
This just means there is some equation – for example, tstack being the square root of the sum of all the individual tolerances squared, which just means square each of the tolerances, add them all together and then take the square root.

The algorithms or equations will be given and explained in the general case, but it is helpful to compare approaches. Purely to do that, we will take the specific (if implausible) case where all the component tolerances happen to come out equal: t1 = t2 = t3 = t4 = t*. This is done just to make the comparison computations easy. Let's call the corresponding stack tolerance t*stack.

Some Maths

The following slides will use some maths, so let's get some of the manipulations clear and out of the way:
As explained above, Σdi means add up all the values (and don't forget they may be plus or minus). So in our example, to calculate the nominal ‘gap’:
Σdi = – d1 – d2 – d3 + d4

Recall that √Σσi² means:

  • square all the sigmas,
  • add them all up (no minuses unfortunately!),
  • and take the square root of the sum.

Note that if we have a constant such as ‘c’ inside the expression, as in √Σ(cσi)², it is equal to √Σc²σi², which equals c√Σσi² – you can put the c² outside the sum, or take the c entirely outside of the brackets:
√Σ(cσi)² = √Σc²σi² = c√Σσi².

That piece of maths enables us to say, for example, that:
If ti = 6σi,
then √Σσi² = (√Σti²)/6,
or even √Σ(4.5σi)² = (√Σti²) × 4.5/6.

Finally, for the purpose of the comparative example, note that for four equal tolerances:
t1 = t2 = t3 = t4 = t*,
Σti is 4t* but √Σti² is 2t*.
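A quick Python check of those facts, using a hypothetical value for t*:

    import math

    t_star = 0.2                       # a hypothetical common tolerance t*
    tols = [t_star] * 4                # t1 = t2 = t3 = t4 = t*

    print(sum(tols))                                  # Σti = 4t*  -> 0.8
    print(math.sqrt(sum(t**2 for t in tols)))         # √Σti² = 2t* -> 0.4

    # Factoring a constant out of a root-sum-square: √Σ(cσi)² = c√Σσi²
    sigmas = [t / 6 for t in tols]                    # if ti = 6σi
    print(math.sqrt(sum(s**2 for s in sigmas)))       # √Σσi² ...
    print(math.sqrt(sum(t**2 for t in tols)) / 6)     # ... equals (√Σti²)/6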

Traditional Statistical Tolerancing

When statistical tolerancing was conceived, a tolerance of just 3σ was considered entirely adequate!

Assuming component tolerances were set at 3σ, a three sigma stack tolerance was calculated as √Σti², i.e. the root-sum-square of the ti's.

This procedure was found to give optimistic results, and in 1962 Arthur Bender Jr published a paper which proposed adding a 50% margin to the stack tolerance, thus
tstack = 1.5 times the root-sum-square of the tolerances,
and ‘Benderizing’ is still a mainline traditional approach.

Tolcap predicts Cpk rather than σ, but we can readily make use of Tolcap to analyse the traditional method and then look at how it can be developed and improved.

A Six Sigma Stack Tolerance

While traditional statistical tolerancing works with three sigma tolerances, let's start from the rather more up to date Design for Six Sigma.

DFSS says to set the tolerance at six sigma, allowing 1.5σ for the ‘process shift’, i.e. to allow for the fact that – contrary to the other assumption in the maths of normal distributions – the mean of the parts may not match the nominal dimension.

Does this mean that the process shift for every manufacturing process really is 1.5σ ?
No! There is no law of nature that would cause that. It is actually derived from the cunning of the manufacturing people – the shift should be less than 1.5σ, but if necessary a shift any greater than that could be detected in production (using a four-sample Xbar-R control chart). A control chart cannot be too commonly required, so this implies that a 1.5σ allowance is generally sufficient.

A Six Sigma Stack Tolerance

The empirical data in Tolcap reflects the reality of the various manufacturing processes and confirms that 1.5σ is sufficient - but not always necessary. The process shift in Tolcap may be as much as 1.5σ, but it will be smaller if appropriate to the process.

Extracting the process shift from Tolcap is no simple matter – it varies across the maps, and the effect of the wizards depends on which issues are being compensated: allowing for a different material will most probably have a different effect from compensating for difficult geometry. So let's use Tolcap taking the Six Sigma approach – that is, assume that 1.5σ gives a reasonable, conservatively large process shift.

That is for now - later on we can do a sensitivity analysis to show what happens when the process shift is less than 1.5σ.

Traditional Statistical Tolerancing

To analyse tolerancing algorithms we will get our tolerances from Tolcap.
We will use Tolcap in the mode that finds the tolerance we need to achieve a target process capability.
We want sigma values for these tolerances. Let's start as I said with the ‘Six Sigma’ approach and tolerances:

  • Open Tolcap
  • Select a map
  • And select Cpk rather than Tolerance
  • Enter the nominal dimension
  • Set Target Cpk to 1.5
  • Apply the wizards and find what tolerance Tolcap gives
  • Repeat that for each tolerance in the stack

Now we can analyse the traditional approach!

Traditional Statistical Tolerancing

Assume Tolcap has given us ‘Six Sigma’ tolerances:
Each ti = 6σi, based on:
4.5σi to give Cpk = 1.5 plus 1.5σ process shift.

So if we did want three sigma tolerances for the traditional approach, we could halve the six sigma tolerances to get 3σi = ti/2.
Our (three sigma!) stack tolerance is then
tstack = √Σ(ti/2)² [the root-sum-square of half the tolerances]
or 0.5 times the root-sum-square of the tolerances.

The ‘Benderized’ tolerance would be 50% larger, i.e. 0.75 times the root-sum-square of the tolerances.
In the specific comparison case where all the tolerances are equal, remembering that
√Σti² = 2t*,
t*stack = √Σ(ti/2)², which comes to t*,
and the Benderized tolerance would be 1.5t*.

But do remember that t* is a six sigma component tolerance ... and don't we want a six sigma stack tolerance to go with that?
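As a sketch of the two traditional recipes above (the tolerance values are hypothetical, not Tolcap output):

    import math

    # Hypothetical 'six sigma' (Cpk = 1.5) tolerances, as Tolcap might return them.
    tols = [0.2, 0.2, 0.2, 0.2]

    rss = math.sqrt(sum(t**2 for t in tols))   # root-sum-square = 2t* here

    three_sigma_stack = 0.5 * rss              # halve the tolerances, then root-sum-square
    benderized_stack  = 0.75 * rss             # add Bender's 50% margin on top

    print(three_sigma_stack)   # 0.2 = t*
    print(benderized_stack)    # 0.3 = 1.5t*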

A Six Sigma Stack Tolerance

How could we work out a six sigma stack tolerance?
Maybe we just root-sum-square the component tolerances?
That works for components.
Then tstack would be the root-sum-square of the tolerances, and for equal tolerances, t*stack would come out at 2t*.

Or maybe we still need to ‘Benderize’ the tolerance?
Do we need the full 50% extra?
t*stack would then come out at 3t* ....

Or is the traditional absolute correction for three sigma tolerances enough?
That would be an extra 25%, so t*stack would be 2.5t*.

To find out, we're going to look at the process shift allowance more closely. This is maybe again a good point to pause and clear your head. Then I'll tell you how we do that.


 

A Six Sigma Stack Tolerance

OK let's look at the process shift allowance more closely.

The process shift recognises that the mean dimension of the parts is not necessarily equal to the nominal dimension on the drawing. Thinking real process shifts:

  • for some processes, such as turning, the shift will depend on how well the setter has set up the batch;
  • for processes such as moulding, the process shift will to a large extent be drift over time as the tool wears.

The process shift thus includes at least an element which is not variable from part to part, but fixed from batch to batch or drifting very slowly over time. So while it makes sense to root-sum-square the ‘random’ part-to-part element of the tolerances (provided they are independent and normal), it is prudent to combine the process shifts worst case – and simply add them up rather than root-sum-square them.

A Six Sigma Stack Tolerance

On this basis, our six sigma tolerance ti is 6σi.
Process shift is 1.5σi which is a quarter of the tolerance.
Part-to-part variation is 4.5σi which is three quarters of the tolerance.

Then tstack would be one quarter of the sum of the tolerances plus three quarters of the root-sum-square of the tolerances.

Simplifying the equation to make computation easier, using the bit of maths we did before:
tstack = (sum of the tolerances) × 1/4 + (root-sum-square of the tolerances) × 3/4.
In the specific comparison case where all the tolerances are equal, t*stack comes out at 2.5t*, and it's tempting to say that this lines up with one of our Benderized projections – but remember this is a special, artificial example that happens to use four components.

But here at last is a method! Find 6σ tolerances from Tolcap, then add one quarter of the sum of the tolerances to three quarters of the root-sum-square of the tolerances.
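Expressed as a minimal Python sketch – the function name and the example tolerances are hypothetical, only the coefficients come from the method above:

    import math

    def stack_tolerance_cpk15(tols):
        """Stack tolerance from 'six sigma' (Cpk = 1.5) component tolerances:
        a quarter of the sum plus three quarters of the root-sum-square."""
        return 0.25 * sum(tols) + 0.75 * math.sqrt(sum(t**2 for t in tols))

    # Four equal hypothetical tolerances: 0.25 x 4t* + 0.75 x 2t* = 2.5t*
    print(stack_tolerance_cpk15([0.2, 0.2, 0.2, 0.2]))   # 0.5 = 2.5 x 0.2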

Sensitivity Analysis

Now for a sensitivity analysis. The analysis above assumed a ‘Design for Six Sigma’ 1.5σ process shift in the tolerances obtained from Tolcap.
If we knew the process shifts were smaller we would modify our calculation.

For example, suppose the data in Tolcap reflected only 0.5σ process shifts for all the components in the stack. Then a Cpk = 1.5 tolerance will comprise 4.5σ for the short term variation plus only 0.5σ for the process shift: a five sigma tolerance where we expected a six sigma tolerance!

What is the effect of this?
Well the process shift at 0.5σ is now the tolerance divided by 10, and the part-to-part 4.5σ is nine tenths of the tolerance.
And now tstack is a tenth of the sum of the tolerances
plus the root-sum-square of nine tenths of the tolerances,
which comes out to 0.1 times the sum of the tolerances plus 0.9 times the root-sum-square of the tolerances.

And then if we go to the specific comparison case where all the tolerances are equal, that comes out at
t*stack = 2.2t*.

Sensitivity Analysis

So even if, unknown to us, we actually had five sigma tolerances, we can have some confidence that our computation assuming ti = 6σi is conservative, and the margin in t*stack would be 12%.

If there were processes such that all the component process shifts were zero, then each ti would be 4.5σ, and we would want to simply root-sum-square the tolerances.
In this case we would find t*stack = 2t*.

The margin in t*stack would be 20%.
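The sensitivity analysis can be wrapped into one small sketch that takes the assumed process shift as a parameter (a hypothetical helper, assuming Cpk = 1.5 so that part-to-part variation always accounts for 4.5σ of the tolerance); it reproduces the 2.5t*, 2.2t* and 2t* figures above:

    import math

    def stack_tolerance(tols, shift_sigma=1.5):
        """Cpk = 1.5 stack tolerance when each component tolerance is assumed to be
        (4.5 + shift_sigma) x sigma: the shift fraction is added worst case, the
        4.5-sigma part-to-part fraction is root-sum-squared."""
        shift_frac = shift_sigma / (4.5 + shift_sigma)
        rss = math.sqrt(sum(t**2 for t in tols))
        return shift_frac * sum(tols) + (1 - shift_frac) * rss

    tols = [0.2] * 4                     # four equal hypothetical tolerances t*
    print(stack_tolerance(tols, 1.5))    # 2.5t* = 0.5  (Design for Six Sigma shift)
    print(stack_tolerance(tols, 0.5))    # 2.2t* = 0.44 (the 'five sigma' case)
    print(stack_tolerance(tols, 0.0))    # 2.0t* = 0.4  (no shift: pure root-sum-square)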

A 5.5σ Stack Tolerance

The analysis used can readily be applied to Tolcap's default Cpk = 1.33 tolerances.
For the same process and dimension as in the ‘six sigma’ case, we still assume the process shifts are 1.5σ, and for Cpk = 1.33 our tolerance needs another 4σ for part-to-part variation:
So now this is ‘Design for 5.5σ’!

So ti is 5.5σi, 1.5σi is 3/11 of ti and 4σi is 8/11 of ti. So tstack comes to 3/11 of the sum of the tolerances plus 8/11 of the root-sum-square of the tolerances.

So we have a simple algorithm for tolerance stacks with a minimum Cpk of 1.5 or Cpk of 1.33, to match our component Cpk.

Using Tolcap

We hope this presentation has explained:

  • the assumptions of statistical tolerancing,
  • why Tolcap is vital for valid statistical tolerancing,
  • and a straightforward method of calculating a sound statistical stack tolerance using Tolcap.

For ‘six sigma’ tolerances, use Tolcap to set component tolerances at Cpk = 1.5, and then:
tstack is one quarter of the sum of the tolerances plus three quarters of the root-sum-square of the tolerances.

For Tolcap ‘default’ Cpk = 1.33 tolerances, use Tolcap to set component tolerances at Cpk = 1.33, and then tstack is 3/11 of the sum of the tolerances plus 8/11 of the root-sum-square of the tolerances.
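Both recipes can be combined in one short helper – only a sketch, with a hypothetical function name and made-up tolerance values, but with the coefficients given above:

    import math

    def tolcap_stack_tolerance(tols, cpk=1.5):
        """Summary recipe (sketch):
        Cpk = 1.5  -> 1/4 of the sum + 3/4 of the root-sum-square,
        Cpk = 1.33 -> 3/11 of the sum + 8/11 of the root-sum-square."""
        a, b = (0.25, 0.75) if cpk == 1.5 else (3 / 11, 8 / 11)
        return a * sum(tols) + b * math.sqrt(sum(t**2 for t in tols))

    # Hypothetical component tolerances as set with Tolcap:
    print(tolcap_stack_tolerance([0.15, 0.20, 0.10, 0.25], cpk=1.5))
    print(tolcap_stack_tolerance([0.15, 0.20, 0.10, 0.25], cpk=1.33))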

Tolcap includes:

  • Calculations for over 80 manufacturing processes
  • FREE trials for business users
  • Low cost business licences
  • No long term contract
  • No set up charges