Understanding the underlying behaviour of a manufacturing process

January 2022

By David Griffin, Principal Consultant - Manufacturing Innovation & Automation at 42T

Smart investment in better knowledge can reap big rewards downstream

There’s a simple principle in user interface design that the UI should faithfully represent the underlying functionality of the system.  That way, what the user is led to believe is happening and what is actually happening match up, and the interface remains intuitive in all situations. 

When this is not true, however, users can become confused by how the system reacts once outside the normal day-to-day envelope of operation, with potentially serious consequences.

All around us are examples of interfaces which feature just such a mismatch.

Some highly computerised modern aircraft have been implicated in situations where pilots under great stress received apparently clear warnings but misunderstood the underlying situation. Cockpit voice recordings of highly trained staff saying “Why did it do that?” are testament to the disconnect between what the UI has led them to believe is happening and what is actually going on beneath the surface.

Sometimes the mismatch between the internal workings of a system and the model presented to the outside world is simply poor design.  And sometimes it is a deliberate (and perhaps with hindsight, misguided) attempt to protect the user from the true complexity. 

Until things stray off the beaten track this may be fine, but once somebody has to react to a situation that wasn’t in the manual, the absence of a correct mental model representing the underlying system becomes at best frustrating and at worst potentially hazardous.  Personal computer software is famous for hiding things “you don’t need to know” with the result that the behaviour when something goes wrong can seem bewilderingly illogical.

It’s not just aircraft or computers where this is a problem, though.  Patients often retain a mental model of how their illness or their medication works that is at odds with reality (either because it was explained simplistically or not explained at all), resulting in counterproductive behaviour when they face a decision (“I stopped taking the tablets because I felt better”).

In many ways this is just like the challenge of delegation.

If you want to delegate a complex task successfully, that person needs to know a lot more than just the steps you think they’ll need to follow.  If not, then as soon as they face a decision you did not anticipate, they’ll either come straight back to you or potentially choose the wrong course of action based on their limited appreciation of the bigger picture.

If, on the other hand, they know the background and the reasons behind the job, they’ll stand a chance of making the correct decision alone.  But telling them everything takes so long that it detracts from the original value of delegation.  It’s an age-old trade-off.

'Black Box' manufacturing processes

There’s a particular example of 'underlying models' that is often seen in the operation of manufacturing process machinery (especially in the food and drink sector).  Different operators choose to react to the same events in different ways.  One may see a certain quality related measurement start to fall and choose to adjust a temperature, whereas another will see the same thing and adjust a belt speed.

It’s not because one (or both) of them is daft (or lazy).  And it’s usually not because only one of them knows the “correct” answer.  The difference is in their underlying mental models for how the process actually works.  (Not how the machine works.  That’s usually beyond dispute). 

The most experienced operators may have built up a sort of empirical understanding of the process (“If I turn up this temperature, the product will start coming out OK in 10 minutes or so”).  But no-one will ever have clearly explained (or documented) exactly how the process really works.

If this were two doctors prescribing different drugs for the same set of symptoms we’d (rightfully) say “isn’t this supposed to be evidence based?”  But in the case of a “black box” food manufacture process with variable feedstock materials (that might be natural products) and large complicated thermal systems, there may be no-one who knows the true model characterising the process - and no clinical trial data from which to infer it empirically!

Best practice behaviour

Many companies will do the next best thing to understanding the model, which is to create a flow chart of best practice behaviour to follow in the event of the process appearing to wander out of control.  At least then all operators behave the same, and chances are that if the empirical findings of the most experienced operators are encoded into this best practice, it will represent a fairly good behaviour pattern most of the time.

Over the long term, though, this does not take the organisation any closer to learning the true behaviour of the system and risks “deskilling” the operators who now lose some ownership and may be less motivated to look out for early warning signs beyond the official measurement regime.

And when the same process is replicated in a similar (or even a scaled-up) machine, there’s a chance that changes in parameters no-one realised mattered will render the best practice from the original line almost useless.

It’s not a mystery why these sorts of processes are not clearly understood, of course.  Such understanding is expensive to come by and generally involves testing that risks disruption to production output.

But sometimes there is a benefit to rethinking the sort of testing that is carried out.

Rethinking testing

The traditional sort of testing consists of varying the parameters that are inexpensive to vary (within a range that hopefully won’t cause a large amount of scrap) and measuring the effect on the output.  The ultimate goal of this is to (empirically) find the optimal setting of the input parameter for best output.

If, on the other hand, the goal is to understand the system (to characterise its transfer function, so to speak), then the process starts with generating hypotheses.  The hypotheses may each posit a different mechanism for something going on inside the black box that is the process.

The next step is to do as much of an analytical sanity check on each hypothesis as possible.  Some just won’t stand up to scrutiny when the thermodynamics of the mechanism are thought through, for example. This analytical step should result in an initial cull.
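To make that concrete, the sort of back-of-the-envelope check we mean might look like the sketch below, written in Python with entirely made-up numbers for a hypothetical heater hypothesis:

```python
# Rough sanity check for a hypothetical hypothesis: "the observed 5 degC rise in
# product temperature across this zone is caused by the 2 kW zone heater alone".
# All figures below are illustrative assumptions, not real process data.

mass_flow_kg_per_s = 0.8          # assumed product throughput
specific_heat_J_per_kgK = 3500.0  # assumed (typical of a moist food product)
heater_power_W = 2000.0           # assumed nameplate power of the zone heater

# The largest temperature rise the heater could cause, even with perfect heat transfer:
max_delta_T = heater_power_W / (mass_flow_kg_per_s * specific_heat_J_per_kgK)

print(f"Best-case temperature rise from the heater: {max_delta_T:.2f} degC")
# About 0.7 degC here, so this hypothesis cannot explain a 5 degC rise and is culled.
```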

Then we look at all the input variables we could change, and all the output variables we could measure, and we ask “if hypothesis A were correct, and we raised input 1, what would that do to output 3?”

Some tests, it will become clear, are not worth doing because the results would be the same regardless of which hypothesis was true.  Others, however, might be expected to show clear differences and help us confirm or refute some of the hypotheses.
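One simple way to keep track of this is to tabulate, for every hypothesis, the direction of change it predicts in each output when each input is raised, and then flag only the tests on which the hypotheses disagree.  The sketch below (with invented hypotheses, inputs and outputs) illustrates the idea:

```python
# For each hypothesis, the direction of change it predicts in an output when an
# input is raised. The hypotheses and variable names are invented for illustration.
predictions = {
    "H_A: heat-transfer limited": {("input 1", "output 3"): "up",
                                   ("input 2", "output 3"): "none"},
    "H_B: moisture limited":      {("input 1", "output 3"): "up",
                                   ("input 2", "output 3"): "down"},
}

candidate_tests = [("input 1", "output 3"), ("input 2", "output 3")]

for test in candidate_tests:
    outcomes = {name: preds.get(test, "unknown") for name, preds in predictions.items()}
    if len(set(outcomes.values())) > 1:
        print(f"Raise {test[0]}, watch {test[1]}: worth running -> {outcomes}")
    else:
        print(f"Raise {test[0]}, watch {test[1]}: not discriminating -> {outcomes}")
```

Here the first test tells us nothing (both hypotheses predict the same response), whereas the second would separate them.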

Note that the key output variables being measured may not relate to actual output product quality at all.  The tests may result in better, worse, or unchanged product.  But that is largely irrelevant. The goal in these tests is not to improve the process (yet) but to understand it.

If the input and output variables are easy to measure, then it makes sense to precede any active testing with a period of recording baseline data when nothing (intentionally) changes.

If holding input 1 steady results in output 3 fluctuating wildly over a 24-hour period, it’s worth knowing before embarking on a two-hour test!

Naturally it makes sense, when capturing baseline data, to record data for all the variables you cannot control but which you suspect might have an effect.  You don’t want to crowd round a week of baseline data and hear someone say “Do you think we should have logged ambient temperature?”

Then the hypotheses should be tested against the baseline data before you conduct the “real” tests.  If there’s already an anomaly then it may be worth re-evaluating the hypothesis in question. And of course, the baseline data informs us what sort of time period a typical test needs to run to average out background noise.
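In practice this can be as simple as the following sketch, which assumes the baseline logger has produced a timestamped CSV file (the file name and column names here are purely illustrative):

```python
import pandas as pd

# Load a week of baseline data; "baseline_week.csv" and the column names are hypothetical.
baseline = pd.read_csv("baseline_week.csv", parse_dates=["timestamp"], index_col="timestamp")

# How much does each output wander while the inputs are (nominally) held steady?
for col in ["output_3_moisture", "output_5_density"]:
    print(f"{col}: range {baseline[col].max() - baseline[col].min():.2f}, "
          f"std {baseline[col].std():.2f}")

# A crude guide to test length: lengthen the averaging window until the spread of
# the windowed means becomes small relative to the effect size you hope to detect.
for window in ["1h", "2h", "4h", "8h"]:
    spread = baseline["output_3_moisture"].resample(window).mean().std()
    print(f"averaging window {window}: spread of window means = {spread:.3f}")
```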

Finally the 'real' tests should be carried out and evaluated objectively.  (Not tweaked midway through because it seems to be making really nice product, for example).

We need to keep in mind that the aim is to confirm or refute one or more hypotheses, and if the outcome is that none of them seem to hold up, then useful knowledge has still been gained.  This will probably result in the generation of new, better hypotheses.

The goal in all of this is understanding.  In a world where many competitors may be operating similar assets, better understanding represents a competitive advantage, and one which will be inexpensive to replicate across multiple sites using similar processes once it has been generated.  It may represent those last few percent you’ve been looking for.


David Griffin

If you would like to find out more, contact David:
answers@42T.com | +44

David is a Principal Consultant at 42T and an industry-experienced mechanical engineer.  He spent a decade developing bespoke test and assembly automation in fields ranging from motor winding to asthma inhaler manufacture, and several years developing and optimising solvent removal technology and continuous chemistry systems.  He also spent a period in industrial inkjet printing system development.
