Many years have passed since the first failure modes and effects analysis tools were developed in the mid-20th century. FMEA (the acronym for Failure Modes and Effects Analysis) was initially used in the military and aerospace sectors. In the 1970s, Ford adopted the method, which pushed it into the automotive sector. It wasn’t until the 1990s, when it was standardised by the Automotive Industry Action Group (AIAG), that it became one of the most important risk analysis tools.
This analysis is mainly focused on design (DFMEA) and process (PFMEA). Since then, there have been various versions of the manual explaining the process for carrying out FMEA. In recent years, we have grown accustomed to the fourth version of the manual and its risk rating tables. These tables define the rating (from 1 to 10) that we give to the three factors assessed in an analysis: severity, occurrence, and detection.
Severity tells us the seriousness of the effect caused by a possible failure, with our focus mainly turned towards the final user’s safety. Failures that risk harm to the user are always given the maximum rating, and those involving non-compliance with legal or regulatory requirements also receive high ratings.
Occurrence evaluates the likelihood that a failure will occur and, although the standard has rather precise evaluation tables based on PPMs or Cpk values, assessment is sometimes difficult due to a lack of sufficient records or information. Often, assigning this rating is the most complicated part of the process, provoking the greatest debate among team members. The risk is that the team falls into meaningless debate and loses sight of what an FMEA is really looking for: anticipating possible problems and defining actions that eliminate, or at least reduce, the probability of a specific failure occurring, and that limit its impact if it does occur. Resolving the issue through design is always the ideal way to avoid the failure’s occurrence, or at least to limit its probability through redundancies. In the case of electronic manufacturing processes, poka-yokes are the best option, so that incorrect assembly is impossible.
However, actions often tend to be based on increasing detection, and automatic methods are the most efficient in this case. For this reason, test equipment in electronics manufacturing is the filter that allows defects to be reduced and keeps defective products from reaching the client. Furthermore, the more automated this process is, the more effective it will be, and so it will be assigned a lower FMEA detection rating.
To date, multiplying these three ratings is what has provided the RPN (Risk Priority Number). However, this risk evaluation did not always adequately prioritise the actions to take, given that it weights severity, occurrence, and detection equally. Yet everyone will agree that a risk to human life should always take priority over any other risk with a higher RPN that does not put the user in danger.
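The weakness described above can be seen with a minimal sketch of the classic RPN product; the two failure modes and their ratings are invented purely for illustration:

```python
# Minimal sketch of classic (4th-edition style) RPN prioritisation.
# Each rating uses the standard 1-10 scale; the example data is invented.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: the plain product of the three ratings."""
    return severity * occurrence * detection

# Two hypothetical failure modes:
cosmetic = rpn(severity=4, occurrence=8, detection=8)   # frequent, hard to detect
safety = rpn(severity=10, occurrence=2, detection=3)    # rare, but harms the user

print(cosmetic, safety)  # 256 60
```

Because the product weights all three factors equally, the cosmetic defect (RPN 256) outranks the safety-critical one (RPN 60), even though a severity of 10 implies risk of harm to the user.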
Since 2015, AIAG and VDA (The German Association of the Automotive Industry) have been working to consolidate a new methodology for carrying out FMEAs. This will lead to the publication of a new version of the manual, which will be edition 5. We will all have to adapt to this new standard, which presents changes to the current system. One of the most important changes is the manner of rating and prioritising. The RPN disappears, with a new concept called the AP (Action Priority) appearing in its place. This classification provides different weights for severity, occurrence, and detection, and the direct result of the AP index is a high, medium, or low classification of the action’s priority (not the classification of risk).
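To illustrate the shift from a single product to a severity-weighted classification, the sketch below uses an invented, deliberately simplified rule set; the actual AP tables in the AIAG-VDA handbook are far more granular lookup tables, so treat the thresholds here as placeholders only:

```python
# Simplified sketch of Action Priority (AP) classification.
# The thresholds below are invented for illustration; the AIAG-VDA
# handbook defines AP through detailed S/O/D lookup tables.

def action_priority(severity: int, occurrence: int, detection: int) -> str:
    """Return 'high', 'medium', or 'low' priority for action.

    Unlike the RPN product, severity dominates: a severe failure is
    never deprioritised merely because detection is good.
    """
    if severity >= 9 and occurrence >= 2:
        return "high"
    if severity >= 7 and (occurrence >= 4 or detection >= 5):
        return "high"
    if severity >= 4 and occurrence >= 4 and detection >= 4:
        return "medium"
    return "low"

# The safety-critical case now outranks the frequent cosmetic defect:
print(action_priority(10, 2, 3))  # high
print(action_priority(4, 8, 8))   # medium
```

Note that the output is a priority for action (high, medium, low), not a classification of the risk itself, which matches the intent of the new standard.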
The FMEA process is represented in six steps:
1.- Definition of scope.
2.- Analysis of the structure.
3.- Analysis of the function.
4.- Analysis of the failure.
5.- Analysis of the risk.
6.- Optimisation.
Before us lies the work of training and of adapting tools and procedures to this new way of handling FMEAs. There has been debate over whether this new focus is better or worse than the previous one. Time will tell whether the result of these last three years of alignment between AIAG and VDA provides a method that continues to adapt to change, and remains valid more than half a century after it first appeared.