During early 2019, FertAid will be upgrading elements of QAPonline.
QAPonline and External Quality Assurance
Accreditation requires participation in an external quality assurance (EQA) program. Unfortunately, it is often acknowledged that without mandatory participation, many laboratories would not subscribe to an EQA scheme at all. Initial EQA programs were based upon biochemical techniques performed on a variety of machines and platforms. These schemes involved the dispatch of samples that were run in the normal manner. The results were submitted to the EQA body, which returned an assessment of the machine's values compared with other machines of the same or different platforms. The aim was to ensure that the operation of the machine produced a result not inconsistent with others. The disappointing aspect of biochemically based EQA was that there was more variation between platforms than within platforms! For years I have been amazed that a laboratory can perform very well within its own method yet differ significantly from other methods. The answer, of course, is that different methods should not exist. After all, how can an oestradiol value differ between methods? It shouldn't, but it does. That does not make it OK though.
Application of the EQA principle to reproductive medicine followed the same process of sample dispatch, analysis, submission and reporting. However, many of the processes being surveyed were not performed on a machine but visually, and often by one member of a larger team. The aim was, theoretically, to test the sample preparation and assessment, often across various platforms. Such EQA schemes therefore see parallels between a machine-based EQA program and a skills-based EQA program. The problem is that each machine and its platform (reagents, reactions, etc.) have been extensively quality managed before delivery, so error reflects maintenance, implementation or protocol. Visually based EQA largely has little quality management and requires the skills of each and every operator to be consistent over time. Many laboratories attempt to manage this by either submitting a consensus value derived from the mean of all staff, or by performing an Internal Quality Assessment (IQA) after the results have been returned.
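The two management approaches just described can be sketched as a toy calculation: derive a consensus submission from the mean of all staff, then run an internal check once the EQA target comes back. The numbers, staff labels and the 2-SD review threshold below are hypothetical illustrations, not QAPonline's method.

```python
# Illustrative sketch only: a laboratory derives a consensus EQA
# submission from individual staff assessments, then flags staff in an
# internal QA check once the EQA group mean and SD are returned.
# All figures and the 2-SD flagging rule are hypothetical.
from statistics import mean

staff_counts = {"A": 42.0, "B": 45.5, "C": 39.0, "D": 44.0}  # sperm conc., M/mL

consensus = mean(staff_counts.values())          # value submitted to the EQA
print(f"consensus submission: {consensus:.1f}")

# After the EQA body returns the all-laboratory mean and SD:
eqa_mean, eqa_sd = 43.0, 4.0
for name, value in staff_counts.items():
    z = (value - eqa_mean) / eqa_sd
    flag = "review" if abs(z) > 2 else "ok"
    print(f"staff {name}: {value:.1f} (z = {z:+.2f}) {flag}")
```

Note that the consensus value can mask exactly the between-staff variation the internal check is meant to expose, which is the limitation discussed below.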
In most laboratories, one machine will perform all the analysis of a particular reaction, and therefore an EQA submission reflects both that machine and the whole laboratory's performance. Visually based analyses are the opposite: the EQA submission should reflect only one staff member's performance, while the laboratory's performance must be derived from each and every staff member. NATA rules require each analyser to be part of an EQA programme where possible; by the same logic, each staff member should also be enrolled.
It is my impression that many EQA schemes in reproductive medicine argue that sampling and preparation are critical determinants of a sample's final value. In reality, the sample preparation is often as simple as placing a small volume of sample on a counting chamber or a pre-stained slide. Hardly rocket science, and unlikely to alter the final count. Far more likely, the sample itself is inherently unstable, since most EQA schemes use fixed sperm samples. The variability therefore lies more in how a scientist re-suspends the fixed sample than in any other physical process. QAPonline argues that the largest source of variation is between staff, that this dominates the variability in EQA submissions, and that variation due to re-suspending fixed samples adds a degree of variation not normally experienced in a laboratory. Therefore, a visually based process is really about proficiency testing rather than mechanical quality assurance.
Proficiency Testing (PT) and Quality Management.
Proficiency testing (PT) and external quality assurance are often used interchangeably, with proficiency testing more a US term and EQA more a European one. QAPonline prefers to use PT for visually orientated EQA schemes and EQA for more mechanised, physically based sample assessments.
After more than 30 years in IVF, it is my belief that there is little technical difference between scientists once they have been fully trained and orientated. This is not to deny that for some procedures, such as ICSI or vitrification, skill levels will vary with dexterity and experience. Current Quality Management Systems should have written protocols for all procedures and require periodic assessment that all staff are following them. Apart from the recurring tendency of scientists to deviate from protocols because they often "feel" an alternative process is better, the technical aspects of IVF are relatively stable. However, far less effort is devoted to the visual decision making of each scientist. Indeed, it seems to me that it is simply assumed all scientists make the same decisions and that they are competent. My experience from QAPonline suggests that this is not true. Since nearly all decisions are made by ONE scientist looking down a microscope, no one really knows if the same decisions are being made. Is this really important anyway?
At the end of the day, one real question all scientific directors, medical directors and patients should ask is: will my treatment or chance of pregnancy vary depending on which scientist performs one of the many procedures, the key ones being selection of embryos for transfer or cryostorage, the choice between IVF and ICSI, and when an hCG trigger should be ordered? Clearly, and not surprisingly, these are subjects of QAPonline's EQA programmes. Before ICSI and before single embryo transfers, semen analysis and embryo transfer were almost independent of the scientist: when 2-4 embryos are transferred together, surely the "best" embryo will be among them and result in pregnancy. Now that both procedures dominate, however, variation between scientists becomes increasingly important. True, one could argue that if the "best" embryo is not selected by one staff member but is frozen instead, a pregnancy may not occur following the fresh transfer but may occur in a subsequent FET cycle. If one staff member scores morphology higher than others, IVF rather than ICSI may be recommended by the fertility specialist. Maybe this does not matter, since both may produce similar outcomes. Consider this, however: once an embryo has been frozen, there is an expectation of quality and an implied requirement to undertake frozen embryo cycle(s) to have it thawed and, if it survives, transferred. Consider also that while IVF and ICSI appear similar in risk, long-term adult health and wellbeing remain unknown. If, in the future, ICSI-conceived adults face an increased risk of some condition, a semen analysis that resulted in an inappropriate ICSI referral will have exposed the patient to unnecessary risk. In many clinics, ICSI is also charged at a significantly higher rate, adding a potentially unnecessary cost burden on clients. A scientist performing a semen analysis is often quite removed from the clinical recommendations that flow from their visual assessments, but those consequences are real.
Ask any of your clients, however, and the response will be quite different. Not only may it result in additional cost or treatment, but it may also change the likelihood of success. Furthermore, no client will be impressed if an embryo discarded by one staff member might have been frozen by another, or vice versa, since one implies the loss of a chance of pregnancy and the other an unnecessary cost and a failed outcome.
This issue is not limited to IVF. Any visually based system has the potential for variation in interpretation; histopathology and ultrasound are two good examples. It is inconceivable that identification of a medical condition such as cancer could vary with the skill level of the scientist or medical practitioner who performs the test. Likewise, the chance of pregnancy following IVF should not depend on who is making the decision about embryo fate. It would seem reasonable for clinic management to require certification that the fate of each embryo would be the same regardless of scientist. Proficiency testing at a relatively frequent and continuous rate is the only way such assurance can be provided to patients and clinic management.
IVF is relatively unique in that microscope-based decisions dominate the process and are often made by trained, competent but often junior staff members. There are few PT schemes for embryologists and no PT schemes for ultrasonographers, but there are many EQA schemes for andrology. These schemes are based upon a few fixed samples dispatched each year; one staff member in a laboratory reports on the sample, and the result is sent to the EQA management as a reflection of the laboratory's performance. Once the results have been processed, the laboratory is expected to ask each staff member to repeat the assessment, and the quality manager analyses the results to determine the competency of each staff member.
This is satisfactory at one level, but QAPonline's opinion is that there are significant limitations. Firstly, it is not mandatory that all staff be involved in the internal QA, and many laboratories may perform it with varying degrees of rigour; it is assumed that during accreditation the IQA will be confirmed and accepted. Secondly, it is difficult to trace an individual staff member's performance over time, and that performance may not form part of their own professional record: if they move to another laboratory, the data will most likely not move with them. There may be some clinics that facilitate this, but they have been rare in my experience. Thirdly, these exercises are performed so few times each year that it is difficult to estimate trends.
In summary, QAPonline's charter, if you like, is that quality systems are built from the performance of all individuals, since on any one day, for any one sample, the laboratory needs confidence that regardless of who performs the assessment, the interpretation will be the same, or at least sufficiently similar that any clinical consequence will be the same.
Furthermore, the performance of each individual must be repeatedly tested and reaffirmed, since the laboratory cannot assume that an individual will always make the same interpretation over time. The summation of an individual's repeated assessments therefore constitutes a summary of that individual's professional skills and should form part of their CV. As such, it is not unreasonable to ask a potential employee for a history of their performance, in order to gauge the impact their employment would have on the laboratory's future EQA performance. This is specifically a consequence of QAPonline targeting the individual rather than the laboratory for the EQA activity. There should also be a commitment by senior scientists to ensure the same interpretations are made by future scientists. Only by repeated and continuous assessment can one's skills be maintained and the skills of future scientists assured. QAPonline does not accept that senior staff should be excused from a continuous, ongoing quality assessment program on the basis that "they know what they are doing".
True Value and Peer Estimations.
One criticism of most EQA schemes concerns what is used as the benchmark for a laboratory's performance. Endocrine EQA schemes rarely employ the notion of a PEER laboratory; they should, but do not, use a true reference value estimated using high-end technology such as mass spectrometry. In visual EQA schemes, however, a true value is often difficult to establish, since the assessment criteria involve a degree of interpretation (e.g. what is a normal spermatozoon?). The ESHRE andrology EQA, I understand, derives its mean and SD from a selection of "PEER" laboratories. QAPonline has adopted a different strategy, using demonstrated proficiency to determine PEER status. In this model, individuals who have both a history of activity (>10 years) and a performance in previous QAPonline schemes of more than 80% of replies within 1 SD are automatically nominated as PEERS. One reason is that no laboratory or skilled individual, when asked, has volunteered to act as a PEER for the betterment of others. Secondly, it uses the concept of demonstrated uniformity in performance. There is no limit to the number of PEERS. It does, however, act to reinforce the MEAN value as opposed to some "TRUE" value.
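The stated nomination rule can be expressed as a short predicate. This is only a sketch of the rule as described above; the function name and the example figures are my own, not QAPonline's implementation:

```python
# Sketch of the PEER-nomination rule described in the text:
# an individual with more than 10 years of activity AND more than 80%
# of previous replies within 1 SD of the group mean is nominated a PEER.
def is_peer(years_active: float, replies_within_1sd: int, total_replies: int) -> bool:
    if total_replies == 0:
        return False  # no history yet, so proficiency cannot be demonstrated
    return years_active > 10 and replies_within_1sd / total_replies > 0.80

print(is_peer(12, 17, 20))   # 12 years, 85% within 1 SD -> True
print(is_peer(8, 19, 20))    # accurate, but only 8 years -> False
print(is_peer(15, 15, 20))   # 15 years, but only 75% within 1 SD -> False
```

Because there is no cap on the number of PEERS, everyone who satisfies the predicate contributes to the reference statistics, which, as noted, reinforces the mean rather than an external "true" value.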
Limitations of QAPonline.
Of course, no system is perfect, and all have limitations that need to be weighed when applying them. The major limitations of QAPonline are:
1. The source material is images rather than actual samples, so there is no capacity to assess handling or processing skills. The flip side is that everyone is assessing the same sample, so any variation lies in the visual skills of the participant. One caveat of using images is that they must be legible to all. It is worth noting, however, that in the laboratory many samples are less than perfect in image clarity, yet diagnoses are still made.
2. There are numerous variations in method for most EQA skills: different stains for morphology, different counting chambers for concentration, and so on. I guess this is a fact of life, but I would like to make an observation. Having reviewed endocrine EQA samples over many years, I remain immensely dissatisfied that various manufacturers may produce automatic analysers that differ in performance. Even though an EQA scheme may partition a laboratory's performance by method, that does not make it OK: at the end of the day, referring clinicians need to adjust diagnoses, and the interpretation of published material needs to be modified. Likewise, one staining method for morphology may be quicker than another, but that should not be an excuse to allow variations in performance. QAPonline does provide a range of subgroups in which to compare one's performance. In some ways, this puts the assessment of a laboratory's EQA in its own hands and not in those of a fixed body.
3. The time frame for completion is open-ended in QAPonline, unlike sample-based EQA. This means that the mean value of a release may change over time, and one's performance is modified as new data arrive. QAPonline has used the freedom of the Internet to allow continuous refreshing of the data. Interestingly, once a data set has more than 20-30 replies, the mean and SD change very little, if at all. The advantage, however, is that anyone may complete the sample at any time. Instead of a laboratory keeping a series of internal controls, new staff members may complete past activities as a form of competency assessment against previous laboratory submissions.
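The stability claim above can be illustrated with simulated data (this is not QAPonline code, and the distribution parameters are invented): once a release holds a few dozen replies, each additional reply shifts the running mean only slightly.

```python
# Illustration of why an open-ended reply window is workable: the
# incremental shift in the running mean shrinks as replies accumulate.
# Replies are simulated from an arbitrary normal distribution.
import random

random.seed(1)
replies = [random.gauss(40.0, 4.0) for _ in range(60)]  # simulated submissions

running_mean = 0.0
for n, value in enumerate(replies, start=1):
    previous = running_mean
    running_mean += (value - running_mean) / n   # incremental mean update
    if n in (5, 10, 30, 60):
        shift = abs(running_mean - previous)
        print(f"n={n:2d}  mean={running_mean:5.2f}  shift={shift:.3f}")
```

The n-th reply can move the mean by at most its deviation divided by n, which is why late replies barely disturb an established data set.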
If you would like to respond to any comments made above, email them to office@fertaid.com and they will be added to the text below.
© FertAid Pty Ltd. 2002 - all pages. All rights reserved.