Tim's blog
www.timdavis.co.uk
Friday, 27 July 2012
Dr Genichi Taguchi died last month. I met Dr Taguchi several times in the 1990s (http://bit.ly/QOfCn7) when he came to Ford to help us. From him, and from Don Clausing of MIT (who died in 2010), I learned most of my engineering. Like many others who have made fundamental contributions to our understanding of the way the world works, he simply reversed the order of things we thought we were already doing in the right order. It used to be thought that a design should first be made to achieve a target, and that the variability should then be reduced. This approach often resulted in tighter than necessary tolerances, and thus higher costs. Dr Taguchi reversed this sequence: first reduce the variability (by finding design factors that interact with the noise factors, i.e. the sources of variation), and then work out how to get the thing on target. It's a beautiful idea, still not widely taught to undergraduate engineers.
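As a toy illustration of this two-step idea (my own sketch with a made-up transfer function, not anything of Taguchi's), the snippet below shows a design factor x that interacts with a noise factor z, so one setting of x makes the response far less sensitive to z; a separate adjustment factor then moves the mean onto target without re-inflating the variance.

```python
# A toy two-step robust design illustration (hypothetical transfer function).
import numpy as np

rng = np.random.default_rng(42)
z = rng.normal(0, 1, 10_000)          # uncontrolled noise seen in the field

def response(x, adjust, z):
    # made-up model: the x*z term is the design-by-noise interaction
    return 10 + 2 * z - 1.8 * x * z + 0.5 * adjust

# Step 1: choose the design factor setting that shrinks the variation.
for x in (0.0, 1.0):
    y = response(x, adjust=0.0, z=z)
    print(f"x={x}: mean={y.mean():.2f}, sd={y.std():.2f}")   # sd drops from ~2.0 to ~0.2

# Step 2: with x = 1 fixed, use the adjustment factor to put the mean on target.
target = 12.0
adjust = (target - response(1.0, 0.0, z).mean()) / 0.5
print(f"adjustment setting of about {adjust:.1f} puts the mean on target")
```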
Tuesday, 21 September 2010
A collection of my blogs for the 2010 RSS conference
THURSDAY, 19 AUGUST 2010
I have started to think about what I want to say on Statistical Engineering at the RSS conference in Brighton next month (http://www.rss.org.uk/main.asp?page=3162). I have been thinking a lot about the iterative learning cycle involving the interchange between inductive and deductive logic; as statisticians, do we pay enough attention to this distinction? It seems to me that the scientific context of the problems we are involved in solving should play a central role in this iteration. I will say something about this with regard to engineering problems.... but what do statisticians working in other fields think about this? Your thoughts ahead of conference would be welcome...
WEDNESDAY, 25 AUGUST 2010
Statistically based initiatives aimed at solving engineering and quality problems often over-emphasise empirical methods at the expense of deductive logic; the Six Sigma movement is a good example of this – its problem-solving algorithm of Define, Measure, Analyze, Improve, and Control (see for example http://en.wikipedia.org/wiki/Six_Sigma for some background on what these steps entail) sets great store by solving problems by measuring lots of characteristics and analysing the resulting data. However, Six Sigma has nothing to say about eliminating hypotheses through deductive logic. In my Brighton talk, I will introduce a simple method to facilitate this step in problem solving and root cause determination, so that an empirical approach using statistical methods can then be better targeted. This method is not taught in Six Sigma classes (groping around in Minitab output for “significant” p-values seems to be the preferred approach), or even referenced in statistical texts, which is strange given the central role of statistical methods in problem solving generally.
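To make the risk of the purely empirical route concrete, here is an illustrative sketch of my own (not something from Six Sigma material): if fifty characteristics are measured on good and failed parts and none of them is actually related to the failure, a few will still look “significant” at the 5% level purely by chance – exactly what undirected p-value hunting will latch onto.

```python
# Illustration only: with 50 unrelated characteristics, expect roughly 2-3
# to appear "significant" at p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_parts, n_characteristics = 40, 50

# Simulated measurements on good vs. failed parts, with no real differences.
good = rng.normal(size=(n_parts, n_characteristics))
failed = rng.normal(size=(n_parts, n_characteristics))

p_values = stats.ttest_ind(good, failed, axis=0).pvalue
print(f"{(p_values < 0.05).sum()} of {n_characteristics} characteristics "
      "look 'significant' at p < 0.05 by chance alone")
```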
MONDAY, 30 AUGUST 2010
My previous comments on problem solving led me to think about how I might illustrate the use of statistical methods in directly solving engineering problems. I have been involved in many interesting and challenging problems in my 30 years in the automotive industry. The recent media coverage of both the Toyota problem with sticking accelerator pedals and the BP oil spill in the Gulf of Mexico caused me to think back to my involvement in a similarly high profile case - the Firestone tyre crisis of 2000/1, which resulted in around 300 fatalities and a $3Bn recall of 20 million tyres. There are many similarities in all three of these cases (not least the role of the media and government agencies), but in the case of Firestone, I will show how a range of statistical methods was used (from simple EDA methods like box plots, to more sophisticated methods such as competing-risks proportional hazards regression) to get to the root cause of the problem, to quickly get ahead of the game, and to decide on what actions to take before the regulatory authorities told us what to do.
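For readers unfamiliar with the last of those methods, the sketch below shows one common way of handling competing risks: fit a cause-specific Cox proportional hazards model per failure mode, treating the other mode as censoring. The data, column names and coding are entirely made up for illustration (this is not the Firestone analysis), and it assumes the Python lifelines package.

```python
# Cause-specific proportional hazards on synthetic data (illustration only).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "plant": rng.integers(0, 2, n),        # hypothetical covariate
    "age_months": rng.uniform(0, 60, n),   # hypothetical covariate
    "time": rng.exponential(36, n),        # months to failure or censoring
    # 0 = censored, 1 = tread separation, 2 = other failure (made-up coding)
    "failure_mode": rng.choice([0, 1, 2], n, p=[0.7, 0.2, 0.1]),
})

for mode in (1, 2):
    d = df.copy()
    d["event"] = (d["failure_mode"] == mode).astype(int)  # competing mode treated as censored
    d = d.drop(columns="failure_mode")
    cph = CoxPHFitter()
    cph.fit(d, duration_col="time", event_col="event")
    print(f"--- cause-specific hazard for failure mode {mode} ---")
    cph.print_summary()
```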
SUNDAY, 5 SEPTEMBER 2010
High profile cases like BP, Toyota, and Firestone bring into sharp relief the subject of engineering for reliability. As statisticians, we seem to have got everybody, from ourselves, to scientists & engineers, to senior management, to regulatory authorities, comfortable with the idea of expressing reliability as a probability. Indeed, in media interviews, the BP CEO quoted a failure probability of “about 10⁻⁵” for the oil rig that exploded causing the spill. In his investigation into the 1986 Challenger disaster, when NASA management had quoted a similar probability for the reliability of the Space Shuttle, Richard Feynman asked in his report, “What is the cause of management's fantastic faith in the machinery?” Probability measures for reliability may be appropriate for some fields of engineering, but I will introduce an information-based definition that is better suited to many engineering situations (including automotive) where the probability definition simply can’t be measured. I will argue that the focus should be on evaluating the efficacy of countermeasures for identified potential failure modes, and the statistical methods required to evaluate this efficacy are very different from those required in attempting to measure reliability through a probability.
THURSDAY, 9 SEPTEMBER 2010
In engineering, reliability problems come about for essentially only two reasons: 1) mistakes, and 2) lack of robustness. Genichi Taguchi did much to bring to our attention the idea of robustness (making designs insensitive to variation, or “noises”), although others had been there too, notably RSS Fellow and Greenfield medallist Jim Morrison as far back as 1957. Taguchi had some important things to say about strategies for improving robustness, one being that engineers should first look to desensitise their designs to variation by experimenting with design parameters related to geometry, material properties and the like, rather than choosing the more obvious path of trying to reduce or eliminate the noises. I will explain some of Taguchi’s ideas, and hope to demonstrate that he didn’t deserve some of the attacks on him by the statistical profession at the time, in stark contrast to the way our profession seems to have embraced the Six Sigma movement with nothing like the same scrutiny afforded to Taguchi’s work.
MONDAY, 13 SEPTEMBER 2010
One of the best methods that I have come across which exemplifies the inductive-deductive iterative nature of statistical investigations (see my first post) dates back to 1914 – the so-called “Pi” theorem of E Buckingham. I will illustrate the use of the “Pi” theorem with the well known paper-helicopter experiment, which many people who have taught statistical methods to engineers will be familiar with. If we adopt a completely empirical approach, we might decide to run a response surface experiment to model the flight time of the helicopter as a function of various design parameters; three design parameters might require about 15 runs in the experiment to develop the transfer function. However, if we think for a minute about the physics, we know that the flight time will be a function of the mass of the helicopter and the area swept out by the rotors, together with the force due to gravity and the density of air – and all of these quantities are known. The application of the “Pi” theorem, which reduces the dimensionality of the problem and does not require linearity to ensure dimensional consistency, reveals that the number of experimental runs can be reduced to about three. It is a mystery why the “Pi” theorem isn’t referenced in any of the classic texts on response surface methodology and design of experiments; is it because not enough statisticians are interested in engineering?
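For the curious, here is a minimal sketch of the mechanics of the “Pi” theorem for the quantities named above (flight time, mass, swept rotor area, gravity and air density): the dimensionless groups correspond to the null space of the dimensional matrix. The code is my own illustration, assuming Python with sympy; the particular groups it prints are one valid choice among many equivalent ones.

```python
# Buckingham's Pi theorem via linear algebra (illustration only).
import sympy as sp

t, m, A, g, rho = sp.symbols("t m A g rho", positive=True)
variables = [t, m, A, g, rho]

# Rows are the base dimensions M, L, T; each column holds the exponents of
# one variable, e.g. g = L T^-2 gives the column (0, 1, -2).
dim_matrix = sp.Matrix([
    [0, 1, 0,  0,  1],   # M: mass and air density carry mass
    [0, 0, 2,  1, -3],   # L: area ~ L^2, g ~ L T^-2, rho ~ M L^-3
    [1, 0, 0, -2,  0],   # T: flight time, and g ~ L T^-2
])

# Each null-space vector gives the exponents of one dimensionless Pi group.
for vec in dim_matrix.nullspace():
    group = sp.Mul(*[v**e for v, e in zip(variables, vec)])
    print(sp.simplify(group))

# Prints two groups (e.g. g*t**2/sqrt(A) and A**(3/2)*rho/m), so the five
# original quantities collapse to a single relationship Pi1 = f(Pi2) that a
# handful of experimental runs can map out.
```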
Monday, 13 September 2010
RSS International Conference in Brighton
I have uploaded a number of posts to the Royal Statistical Society blog for the upcoming international conference in Brighton - see rssconference.blogspot.com