Companies in the scientific instrumentation business, such as Oxford Instruments, can become a little obsessed with technical specifications. Every time we develop and launch a new product, we pore over the performance specifications and make sure it meets our desired targets. These specifications are then placed front and centre in any marketing campaign, especially those we feel will “beat” any competitor’s system. It is no great surprise, then, that tender specifications for a whole host of microanalysis products, such as EDS and EBSD systems, are based on supposedly key figures such as speed, resolution, sensor size or hit rate. But do these specifications really matter?
In a previous career, I was responsible for a number of SEMs in a microscopy lab; on our flagship SEM we had an EDS detector that supposedly had the best energy resolution and the highest X-ray throughput of any on the market at the time. This sounded absolutely wonderful – the best of both worlds. However, it soon became apparent that the system was not quite as good as we had hoped. It certainly met its specifications, but the positions of the peaks in the X-ray energy spectrum were not stable, and a full recalibration of the system was required at frequent intervals. In addition, while the energy resolution was excellent at very low count rates, the resolution and peak shape were truly awful at very high count rates – so we could not analyse quickly and still ensure good-quality data. And then there was the software itself – not at all ideal for the multi-user facility in which it was used. The next time we were buying an EDS system, we made sure it was not all about the specifications: indeed, could we really tell the difference between 125 and 127 eV Mn Kα resolution? No. Far more important were the performance at typical operating conditions, the ease of use of the software and the quality of local after-sales support.
This brings me around to some recent discussions I have been having with some of our customers who are using our Symmetry EBSD detector. As is fairly typical for Oxford Instruments, we have set some quite conservative specifications for the performance of the detector, guaranteeing 3000 patterns per second (pps) on a recrystallised steel or Ni sample using no more than 12 nA beam current and with indexing rates over 99%. On most systems we routinely exceed these specifications, with speeds of 3500 pps quite normal using beam currents significantly below 10 nA. However, how important are these specifications? As a guideline to the performance during routine analyses, they are probably not very useful: most laboratories using EBSD do not spend much of their time analysing the simple recrystallised metal samples that we use during specification testing. However, as a guideline to detector sensitivity, they can be extremely useful. Many of the materials that we analyse using EBSD require higher electron doses to produce good, indexable diffraction patterns than recrystallised steel samples do: so if the specification suggests you need 10 nA to achieve the top speed, then on more typical “real world” samples you will either need much more current than that or will have to slow the analysis down dramatically to ensure good-quality data. I was trying to convey this message to our customers, and ended up drawing a schematic illustration to show how different sample types would require different electron doses, and thus would affect the resulting speeds of data collection:
Of course, such a diagram should only be taken as a guideline, but it does indicate how knowledge of our samples can help us to optimise the performance of our EBSD (or EDS) system. It also allows an assessment of how a detector with poorer performance than Symmetry might fare: if the detector specification were a current of >20 nA to achieve 3000 pps on a recrystallised steel (i.e. twice the dose required by Symmetry), you may well require 40-70 nA to achieve 1000 pps on a simple geological sample. The implications for subsequent sample damage and loss of resolution are very clear to see.
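The arithmetic behind these figures can be sketched in a few lines: the beam charge delivered per pattern is simply the beam current divided by the acquisition speed, so the current needed scales with both the dose a sample requires and the speed you want. The 5× dose factor for the geological sample below is an illustrative assumption chosen to match the numbers in the text, not a measured value.

```python
# Sketch of the dose arithmetic discussed above.
# "Dose" here means beam charge delivered per diffraction pattern.

def required_current_nA(dose_pC_per_pattern, speed_pps):
    """Beam current (nA) needed to deliver a given charge per pattern
    at a given acquisition speed. 1 nA = 1000 pC/s, so
    current (nA) = dose (pC) * speed (pps) / 1000."""
    return dose_pC_per_pattern * speed_pps / 1000.0

# Symmetry specification point: no more than 12 nA for 3000 pps on
# recrystallised steel -> 4 pC of beam charge per pattern.
symmetry_steel_dose = 12.0 / 3000 * 1000.0   # 4.0 pC per pattern

# A less sensitive detector needing twice that dose (as in the text):
poorer_steel_dose = 2 * symmetry_steel_dose  # 8.0 pC per pattern

# Illustrative assumption: a simple geological sample needs ~5x the
# dose of recrystallised steel.
poorer_geo_dose = 5 * poorer_steel_dose      # 40 pC per pattern

print(required_current_nA(symmetry_steel_dose, 3000))  # 12.0 nA (spec point)
print(required_current_nA(poorer_geo_dose, 1000))      # 40.0 nA at 1000 pps
```

The second figure reproduces the lower end of the 40-70 nA range quoted above; the upper end simply corresponds to a sample needing a larger dose multiple.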
A month or two ago, I was discussing an analysis of a cracked steel sample with a colleague within Oxford Instruments. I had analysed an area of this duplex steel sample using AZtec’s Large Area Mapping functionality, with two main objectives: firstly, to collect a very large dataset (> 60 million analyses) in order to test AZtecICE, our new EBSD data processing software (more about this will follow in the coming days…), and secondly, to generate a visually appealing map to be used on one of our summer exhibition booths. I used a beam current of 14.5 nA and an analysis speed of 2375 pps using Symmetry, and my colleague wanted to know why I didn’t set the detector to run faster than the specified 3000 pps. I explained that the grains around the crack tips were highly deformed and required a slightly higher electron dose in order to give good diffraction patterns, and that data quality was more important than speed. Given that the overnight analysis was still more than 130 million points, and the microstructures were beautifully resolved as shown in the image below (taken from < 1% of the total map area), I think I was right to avoid getting too obsessed with specifications and top speeds. Hopefully you will see the full image for yourself in the coming months and will be able to judge for yourselves!
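Putting the numbers from this analysis against the specification makes the trade-off concrete: the settings I chose deliver roughly one and a half times the per-pattern dose of the specification conditions, which is what the deformed grains needed. This is just simple arithmetic on the figures quoted above.

```python
# Comparing per-pattern beam charge for the spec conditions and the
# settings actually used on the cracked duplex steel sample.

def charge_per_pattern_pC(current_nA, speed_pps):
    # 1 nA = 1000 pC/s, so charge per pattern = current / speed * 1000
    return current_nA / speed_pps * 1000.0

spec = charge_per_pattern_pC(12.0, 3000)   # spec point on recrystallised steel
used = charge_per_pattern_pC(14.5, 2375)   # settings used on the duplex steel

# spec is 4.0 pC; used is ~6.1 pC, i.e. roughly 1.5x the spec dose,
# to cope with the highly deformed grains around the crack tips.
print(round(spec, 2), round(used, 2), round(used / spec, 2))
```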
EBSD map of a cracked duplex steel, highlighting the plastic strain in the FCC phase.
What results are you trying to achieve with your analysis? Leave a comment below to talk to me or one of the team about getting more out of your microanalysis system than just the technical specifications.