9th June 2021 | Author: Dr Matt Hiscock
Automated particle analysis is a powerful approach for finding and identifying particles from a wide range of sample types. In some cases this may be to understand the relative proportions of a large population of particles, but in others it may be to find and identify small, critical particles from a comparatively large area.
These “needle in a haystack” searches are particularly well suited to automated analysis: conducted manually, they would be extremely laborious, with the vast majority of the time spent simply trying to find the particles. With automated particle analysis, we define the sample area in which our particles are located, search it using a set of detection rules to find the particles, and then analyse them to determine their composition and so identify them.
When we perform particle analysis, the typical approach is to acquire an electron image, detect where particles are within that image and then analyse those particles – all automatically. We then move the stage to the next field of view and repeat the process – eventually covering our complete sample area.
When we are searching for these small, rare particles, our first thought is normally to work at high magnification in order to see them more clearly. The drawback is that covering the complete sample area then requires many more fields of view than working at lower magnification would. This significantly extends the total analysis time, as more image acquisitions and more stage moves are required – all of which take time.
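To put rough numbers on this (the field width, acquisition time and stage-move time below are made-up illustrative values, not instrument specifications): doubling the magnification halves the field-of-view width, so roughly four times as many fields are needed to tile the same area, and the fixed per-field costs multiply accordingly.

```python
import math

# Illustrative sketch of the magnification/time trade-off.
# All numeric values here are hypothetical examples.

def fields_needed(sample_width_mm: float, field_width_mm: float) -> int:
    """Square fields of view required to tile a square sample area."""
    per_row = math.ceil(sample_width_mm / field_width_mm)
    return per_row ** 2

def total_time_s(n_fields: int, image_s: float, stage_move_s: float) -> float:
    """Each field costs one image acquisition plus one stage move."""
    return n_fields * (image_s + stage_move_s)

low_mag = fields_needed(10.0, 1.0)    # 100 fields for a 10 mm sample
high_mag = fields_needed(10.0, 0.5)   # 400 fields: 2x mag -> 4x fields
print(total_time_s(low_mag, 2.0, 1.0))   # 300.0 s
print(total_time_s(high_mag, 2.0, 1.0))  # 1200.0 s
```

With these example figures, halving the field width quadruples both the field count and the total acquisition-plus-stage time.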
What we need, then, is an approach that lets us work at low magnification to cover the area quickly, while still being able to find and accurately analyse the small particles.
One way of speeding things up – especially in applications where the particles of interest are rare (e.g. gunshot residue (GSR) analysis) and the vast majority of the analysis time is therefore spent acquiring images – is to reduce the dwell time used per pixel in the image scan. Unfortunately, this can degrade our knowledge of exactly where a particle is: a very rapidly moving beam “sees” the particle less accurately than a slowly moving one.
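For a rough sense of scale (illustrative resolution and dwell values only, not from this article), frame time scales linearly with per-pixel dwell, so cutting the dwell by a factor of ten cuts the image acquisition time by the same factor:

```python
# Hypothetical example: frame acquisition time = pixel count x dwell.

def frame_time_s(width_px: int, height_px: int, dwell_us: float) -> float:
    """Time to acquire one frame, given a per-pixel dwell in microseconds."""
    return width_px * height_px * dwell_us * 1e-6

slow = frame_time_s(1024, 1024, 10.0)  # ~10.5 s per frame
fast = frame_time_s(1024, 1024, 1.0)   # ~1.05 s per frame
```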
This leaves us with a problem: we want to be both fast and accurate, yet accuracy seems to demand high magnification and slowly acquired images.
We overcome this problem in AZtecFeature – Oxford Instruments’ particle analysis platform – using an approach called second-pass imaging. First, particles are detected using a rapid scan (to maintain throughput); a particle need only be represented by a single pixel to be detected, which means we can also work at low magnification, again for throughput. Once a particle’s location has been determined from this fast first image, we re-scan that location with a much slower scan, including a few extra pixels around it so that if the particle appears in a slightly different position when scanned slowly, we can still find it. Because only a few pixels are scanned, very little time is used even though the scan is slow. The result is the overall speed of fast scanning combined with the accuracy of slow scanning – even when working at low magnification!
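The two passes described above can be sketched in a few lines. This is my own toy illustration of the idea, not the actual AZtecFeature implementation: a fast, coarse scan flags any pixel above a threshold, and each hit is then re-imaged with a slow scan over a small padded window, from which an accurate centroid is computed.

```python
# Toy sketch of second-pass imaging (illustrative logic only).
# Images are plain lists of rows; edge handling is omitted for brevity.

def first_pass_hits(fast_image, threshold):
    """A particle needs only one above-threshold pixel to be flagged."""
    return [(y, x)
            for y, row in enumerate(fast_image)
            for x, value in enumerate(row)
            if value > threshold]

def refine(slow_scan, hit, pad=2):
    """Slowly re-scan a (2*pad+1)^2 window around the hit.

    Only a handful of pixels, so the slow dwell costs little extra
    time; if the particle drifted out of the window, report None.
    """
    y0, x0 = hit[0] - pad, hit[1] - pad
    window = slow_scan(y0, hit[0] + pad + 1, x0, hit[1] + pad + 1)
    bright = [(y0 + dy, x0 + dx)
              for dy, row in enumerate(window)
              for dx, value in enumerate(row)
              if value > 0]
    if not bright:
        return None
    return (sum(p[0] for p in bright) / len(bright),
            sum(p[1] for p in bright) / len(bright))
```

With a synthetic `slow_scan` that returns the true frame, a fast-scan hit at pixel (10, 10) whose particle actually sits at (10, 11) refines to the accurate centroid (10.0, 11.0) – the small “jump” described in the next paragraph.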
In this video, you can see this in action for a single GSR particle in the middle of a field. The first image is acquired very quickly, and the particle then makes a small jump as the second image is acquired and replaces the first. The jump is smaller than the particle itself (this particle is a couple of microns across) but, if we were to perform EDS analysis using the location obtained from the first scan, we would sample a significant amount of the mounting medium and only part of the particle. In the best case, this would leave the quantified composition of the particle with a large contribution from the mounting medium; in the worst case – with particularly small particles – the particle could be missed entirely and therefore, in terms of particle analysis, mis-classified. Second-pass imaging gets around these problems.
I’d recommend this approach to anyone looking at small particles at low magnification – it really is key to an accurate analysis.
Dr Matt Hiscock
Head of Product Science and Solutions