Attention to Every Detail: Image Processing Projects at The RND Group
The medical industry has a wide array of applications for image acquisition and processing. While the most common types are X-ray, MRI, and ultrasound images, applications can benefit patients in lesser-known areas, as well. From early tooth decay detection, to cell identification using microscopy images, medical device image processing is a vital part of quality healthcare.
Medical Device Image Processing with The RND Group
The RND Group has been a part of several successful imaging projects over the years. In an effort to share a little more about our process, this article reviews some of the ways we work to make our clients’ medical devices as streamlined and effective as possible. Every image processing workflow is distinct in one way or another, but the following general steps apply to most applications:
You can help ensure that your project gets the attention it deserves by following this workflow in detail as described below. Carefully moving through each phase helps our team avoid errors as we work with clients to move their solution to market.
The first step is the most important: the overall image processing result depends entirely on the quality of the input. The setup step is broad because customers present The RND Group with a wide variety of input conditions. Setup includes sample preparation, positioning, focus, and lighting. Consistency is the name of the game; the more variation the algorithm must handle, the harder it becomes to implement and fine-tune.
Sample preparation is unique for each situation. Example preparation techniques include dilutions, chemical/biological reactions, mixing, and applying dyes or stains. Compared to manual methods for managing sample preparation, automation allows for a consistent process and better results.
Positioning the subject relative to the camera can be automated, or handled manually with software assisting the user. Often, positioning is an iterative task interleaved with acquisition and analysis. This is also where we consider focus, which is a special case of positioning: because the distance between the subject and the lens will likely change, focusing is almost always iterative.
Lighting covers not only brightness, but also the type of light, angle of incidence, and filtering. In some cases, multiple images are taken under different lighting conditions, allowing for more complex analysis: the algorithm can use two or more images in which different features are visible due to the lighting. Options include visible light, ultraviolet light, fluorescence, and polarizing filters.
The RND Group has done extensive work in automation of image processing workflows for diagnostic equipment, including robotic positioning, liquid handling, and device control.
Acquisition in the digital world is comparable to clicking the shutter on an analog camera and developing the film. In a typical project, when using a traditional camera type of sensor, the shutter portion of acquisition is relatively trivial. The shutter can be triggered by software or it can be an external digital line controlled by the instrument firmware.
Development of the image corresponds to a transfer of the pixels from the camera to the software, typically via a driver provided by the camera manufacturer. The driver handles the communications with the camera and converts raw pixels into red, green, and blue (RGB) values.
More complex acquisitions exist when working with discrete detectors. A discrete detector can be thought of as a single-pixel camera. In these scenarios, the software is responsible for repeatedly capturing pixels while the subject is moving in front of the detector. The software then reconstructs the image from the pixel stream as part of the acquisition.
In addition to capturing the detector output, the software also controls the movement of the subject. The software can be either part of the hardware (firmware) or part of the control software, or a combination of the two.
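As a minimal sketch of this kind of acquisition, the snippet below simulates a single-pixel detector being stepped across a subject and reassembles the sample stream into a 2D image. All names here (the scan dimensions, `read_detector`, the simulated scene) are illustrative, not part of any real device API:

```python
import numpy as np

ROWS, COLS = 4, 6  # scan grid dimensions (assumed known to the control software)

# Simulated subject: the "scene" the detector sweeps over.
scene = np.arange(ROWS * COLS, dtype=float).reshape(ROWS, COLS)

def read_detector(r, c):
    """Stand-in for one detector sample at stage position (r, c)."""
    return scene[r, c]

def acquire():
    """Step the stage row by row, sampling one pixel per position,
    then reassemble the stream into an image."""
    stream = [read_detector(r, c) for r in range(ROWS) for c in range(COLS)]
    return np.asarray(stream).reshape(ROWS, COLS)

image = acquire()
```

In a real instrument, `read_detector` would be a call into firmware or a driver, and stage motion and sampling would be synchronized rather than simulated.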
The RND Group has worked with all of these acquisition techniques: black-and-white and color camera drivers, low definition, and high definition. We have worked with discrete sensor data both directly and in conjunction with a firmware partner.
Once the image is acquired, enhancement begins. Traditionally, enhancement is applied to the entire image to make the subjects stand out against the background. Enhancements include various filtering techniques, such as high-pass, low-pass, and band-pass filters, pyramid filtering, edge detection, and Fourier transforms (FFT/IFFT).
The enhancement may take a filtered version of the image and re-combine it with the original to strengthen specific features. It may also be used as a starting point to create a mask, which, in later steps, can greatly speed up processing by limiting the scope to areas of interest.
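A common concrete instance of re-combining a filtered image with the original is unsharp masking: a low-pass (Gaussian) copy is subtracted from the original to exaggerate edges. The sketch below uses NumPy and SciPy; the `sigma` and `amount` values are illustrative tuning parameters, not recommendations:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.5):
    """Re-combine a blurred copy with the original:
    out = img + amount * (img - blur)."""
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    return image + amount * (image - blurred)

# Flat regions are left unchanged; edges are exaggerated.
img = np.zeros((32, 32))
img[:, 16:] = 100.0          # a vertical step edge
sharp = unsharp_mask(img)
```

The same recombination pattern applies to other filters: keep the original, compute a filtered version, and blend the two to strengthen the features of interest.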
Segmentation allows an image to be broken into smaller regions to process. Given an image of a slide with cell colonies on it, segmentation may use an algorithm to isolate different colonies, or even individual cells, for further analysis on a per region basis. Segmentation uses complex processes based on the shape and characteristics of the item being isolated. Thresholding and edge detection combined with shape definitions and areas are used. A typical algorithm may have the following steps:
- Threshold to reduce the background noise
- Detect edges
- Increase the perimeter of the edges
- Identify connected pixels (blobs)
- Filter blobs by size – eliminate small and/or large blobs
- Filter blobs by aspect ratio – eliminate if not X:Y ratio
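The steps above can be sketched with NumPy and SciPy's `ndimage` module. The threshold, size limits, and 2:1 aspect-ratio cap here are illustrative values, not calibrated ones:

```python
import numpy as np
from scipy import ndimage

def segment(image, threshold=50, min_area=4, max_area=500, max_aspect=2.0):
    mask = image > threshold                  # 1. threshold out background noise
    mask = ndimage.binary_dilation(mask)      # 2-3. detect/grow edge perimeter
    labels, n = ndimage.label(mask)           # 4. identify connected pixels (blobs)
    kept = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        area = int((labels[sl] == i).sum())
        aspect = max(h, w) / min(h, w)
        if min_area <= area <= max_area and aspect <= max_aspect:
            kept.append(sl)                   # 5-6. filter by size and X:Y ratio
    return kept

img = np.zeros((40, 40))
img[5:10, 5:10] = 100       # roughly square blob -> kept
img[20:21, 5:30] = 100      # long thin streak -> rejected by aspect ratio
regions = segment(img)
```

A production algorithm would replace the simple dilation and bounding-box aspect test with shape definitions tuned to the item being isolated, but the pipeline structure is the same.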
After segmentation is complete, the image is reduced to regions of interest that can be used for more detailed analysis. The image used for segmentation may carry additional enhancements that make segmentation easier. Once the regions are found, they are typically applied to an image with a different set of enhancements.
For example, the original image may have a Gaussian blur applied to smooth out any noise prior to segmentation. However, once the regions of interest are found, they may be used to mask the original image so the fine details of the regions can be analyzed.
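This blur-then-mask pattern can be sketched in a few lines of NumPy/SciPy. The brightness values and blur width here are invented for illustration:

```python
import numpy as np
from scipy import ndimage

original = np.random.default_rng(0).normal(50, 5, size=(32, 32))
original[10:20, 10:20] += 100                 # a bright region of interest

# Segment on a smoothed copy so noise does not fragment the region...
smoothed = ndimage.gaussian_filter(original, sigma=1.5)
region_mask = smoothed > 100

# ...but apply the mask to the ORIGINAL image, preserving fine detail
# inside the region for later analysis.
masked = np.where(region_mask, original, 0.0)
```

The key point is that the enhanced copy exists only to find the regions; the pixels that get analyzed come from the unblurred original.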
Quantification analyzes the regions of interest to correlate them to the value of interest. The quantification could be as simple as a count of the regions, as is the case for cell counting. Quantification can also be the starting point for a much more complex analysis. Some examples include:
- Ratio of value from total interior of region to perimeter exterior of region
- Ratio of peak value inside region to average of non-regions (background)
- Mean value of the peak value from every region
- Density of values
- Comparison from one image to another image across time
The quantified value is then used in a function that correlates it with the desired measurement in scientific units. As an example, consider imaging colonies in a petri dish. The area of each colony is measured in pixels. The quantification step determines the size of each pixel in micrometers and the size of each bacterium being analyzed. The software can then quantify the growth in cells per square millimeter using simple algebraic formulas.
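The petri-dish example reduces to a few unit conversions. In the sketch below, the pixel pitch and per-cell area are made-up calibration numbers, not values from a real instrument:

```python
UM_PER_PIXEL = 3.2        # assumed calibration: one pixel edge, in micrometers
CELL_AREA_UM2 = 80.0      # assumed mean area of a single cell, in square micrometers

def colony_density(colony_pixel_counts):
    """Convert per-colony pixel counts into
    (total area in mm^2, estimated cell count, cells per mm^2)."""
    pixel_area_um2 = UM_PER_PIXEL ** 2
    total_um2 = sum(colony_pixel_counts) * pixel_area_um2
    cells = total_um2 / CELL_AREA_UM2
    total_mm2 = total_um2 / 1e6          # 1 mm^2 = 1,000,000 um^2
    return total_mm2, cells, cells / total_mm2

area_mm2, cells, density = colony_density([1200, 950, 430])
```

Real data reduction would fold in the correlation function (a regression, spline, or sigmoid fit) between the quantified value and the reported result, but the unit bookkeeping looks like this.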
The RND Group specializes in quantification algorithms. We have worked with numerous clients on developing and programming their data reduction algorithms. These algorithms include linear regressions, splines, sigmoid functions, RT-PCR, and normalization, just to name a few.
Last, but not least, we look at optimization. In truth, this step applies to the entire image processing workflow and is equally crucial throughout the design lifecycle. Because images are large datasets, often close to 10 million data points, even a fast computer can quickly become bogged down when the data passes through multiple processing steps.
A typical project will prototype the algorithm (enhancement, segmentation, and quantification) using commercial toolsets or rapid development languages, such as MATLAB or Python/OpenCV. This gives the engineering team the flexibility to change optics, lighting, and cameras, and to easily modify the algorithm to match.
Once the algorithm is stable, though, performance and memory use can still become a bottleneck with those tools. Significant performance improvements can be made by optimizing the algorithm and converting it to a programming language like C++.
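A small illustration of why this matters, even before porting to C++: the same per-pixel threshold expressed as an interpreted Python loop versus a vectorized NumPy operation. Vectorizing (or moving hot loops into compiled code) removes the interpreter from the inner loop; this is a sketch of the principle, not a benchmark:

```python
import numpy as np

img = np.random.default_rng(1).integers(0, 256, size=(128, 128))

def threshold_loop(image, t=128):
    """Per-pixel loop: the interpreter runs once per pixel."""
    out = np.zeros_like(image)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = 255 if image[r, c] > t else 0
    return out

def threshold_vec(image, t=128):
    """Same operation expressed once; the loop runs in compiled code."""
    return np.where(image > t, 255, 0)
```

On multi-megapixel images repeated across many processing steps, differences like this compound, which is why a stable prototype is often re-expressed in optimized C++ for production.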
The RND Group excels at translating algorithms from scientific prototypes into performant production software. We recently recoded a MATLAB function in C++ with OpenCV. The initial performance was 20 seconds per image; once fully optimized, the algorithm ran in 0.4 seconds per image. In addition, during the optimization, two algorithm defects were discovered and corrected, improving overall accuracy.
Image processing is used in a wide variety of medical devices, each with its own set of unique challenges. The RND Group has successfully worked in every aspect of image processing, and we have a deep specialization in medical instrument control and data reduction, including both image-based and non-image-based products.
Your project deserves an expert team. Every year since 1997, The RND Group has worked with leading companies in the medical device industry to develop new products. We apply the rigor required to design, develop, document, and test products to meet the standards required by the FDA and other regulatory bodies.
Ready to get started? Contact us today about your next success, and visit us online at www.RNDgroup.com.