by Maanasa Ravikumar

A biophysicist armed with coding expertise, Richard de Mets is deputy facility manager at the Microscopy Core at the Mechanobiology Institute (MBI) in Singapore. If you’ve ever wondered what your confocal image would look like under a super-expensive, super-resolution microscope, or you need suitable image processing codes for your research, de Mets has the right tools for the job.

Previously, de Mets worked in the Virgile Viasnoff lab at MBI for nearly four years, engaging in a wide variety of research projects and enjoying collaboration with scientists from diverse backgrounds and areas of expertise. But he knew from his early lab days in France that he did not want to be a principal investigator. The thought of being “stuck in the office to write grants and forms,” de Mets says, didn’t match his gregarious personality. Instead, he seized upon the chance to move to the Microscopy Core at MBI in September 2019.

“I want to help researchers understand their projects and teach them how to use microscopes,” de Mets says, “so they can be fully independent and, of course, bring new knowledge wherever they are.” Although he had no prior experience working in a core facility, the job responsibilities matched his outgoing personality and agile research interests. 

Currently, de Mets and the rest of the team are developing new algorithms to help scientists with their image processing needs. Their first algorithm, now under testing, uses a class of machine learning models called generative adversarial networks (GANs).

“I was trying to develop something capable of image restoration and improving image quality at the same time,” de Mets says. “My idea was to use a GAN to be able to link a low signal-to-noise ratio (SNR) image to a high-resolution, high-SNR image.” In other words, the algorithm can predict how a confocal microscopy image might look if it were taken with a super-resolution microscope (see Fig 1). This image processing approach is particularly suitable for thick samples that typically cannot be imaged by super-resolution microscopy and need to be protected from photobleaching. While a super-resolution microscope may be hard to come by for most scientists, de Mets’s algorithm may be an attractive alternative!
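To make the idea concrete, here is a minimal, hypothetical sketch of a pix2pix-style conditional GAN in PyTorch: a small generator learns to map low-SNR crops to their high-SNR counterparts, while a patch-based discriminator judges (input, candidate) pairs. The architectures, hyperparameters, and placeholder data are illustrative assumptions, not de Mets’s actual implementation.

```python
# Sketch of pix2pix-style training for image restoration (illustrative only).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny convnet: predicts a high-SNR image from a low-SNR input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic: scores (input, candidate) pairs patch by patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),  # one logit per image patch
        )
    def forward(self, low_snr, candidate):
        return self.net(torch.cat([low_snr, candidate], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Placeholder batch; in practice these would be registered image pairs
# acquired from the same field of view at low and high laser power.
low  = torch.rand(4, 1, 64, 64)   # low-SNR confocal crop
high = torch.rand(4, 1, 64, 64)   # matching high-SNR / super-resolved crop

for step in range(100):
    # --- discriminator: real pairs vs. generated pairs ---
    fake = G(low).detach()
    d_real, d_fake = D(low, high), D(low, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator: fool the discriminator + stay close to the target (L1) ---
    fake = G(low)
    d_out = D(low, fake)
    loss_g = bce(d_out, torch.ones_like(d_out)) + 100.0 * l1(fake, high)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```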

Together with his colleagues, de Mets is also developing and adapting a plethora of other image processing codes that utilize neural networks and deep learning approaches such as U-Net, CARE, Noise2Void, and Noise2Noise. One such code has been configured exclusively for classification and segmentation purposes, specifically to differentiate signals and reduce background noise in images (see Fig 2). De Mets says the biggest challenges in developing such image analysis methods are understanding the nature of the scientific question, evaluating which microscopy and image processing approaches can help provide answers, and pinpointing how these approaches can be improved. He describes his team’s arsenal of image analysis tools as works in progress that he is open to sharing with interested researchers, to help test and develop them further.
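Of the denoising approaches de Mets mentions, Noise2Noise is perhaps the simplest to illustrate: a network is trained to map one noisy acquisition of a field of view to a second, independently noisy acquisition of the same field, so no clean ground-truth images are ever needed. The PyTorch sketch below uses synthetic data and an illustrative architecture; it is a conceptual demonstration, not the team’s code.

```python
# Noise2Noise-style denoising sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
mse = nn.MSELoss()

clean = torch.rand(8, 1, 64, 64)  # stand-in for the unknown true signal
for step in range(200):
    noisy_a = clean + 0.1 * torch.randn_like(clean)  # acquisition 1
    noisy_b = clean + 0.1 * torch.randn_like(clean)  # acquisition 2
    # The target itself is noisy, but because the noise is independent and
    # zero-mean, the expected optimum is the clean signal.
    loss = mse(denoiser(noisy_a), noisy_b)
    opt.zero_grad(); loss.backward(); opt.step()
```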

“I am happy for people to contact me if they want to know more about our work and to discuss how we can help their research,” de Mets says.

De Mets looks back at the past year fondly, stating that he has found an ideal role where he can follow his passion for developing resources that cater to a multitude of research interests rather than a single project. He advises those looking to transition professionally from the bench to a scientific platform to keep their research interests broad, acquire basic knowledge in the platform of interest, and gain collaborative experience. In addition to technical skills, de Mets emphasizes that developing soft skills, such as effective communication and teamwork, is very important. At the end of the day, building trust in your platform’s capabilities and forging partnerships are key to working on exciting science.

Richard de Mets can be reached at mbirdm@nus.edu.sg or on LinkedIn at https://www.linkedin.com/in/richard-de-mets-1b1806103/

Image details

Fig 1

Sample: PFA-fixed monolayer of primary rat hepatocytes cultured on glass-bottom dishes coated with fibronectin. Cells are stained with Phalloidin-ATTO565.

Microscopy and imaging setup 

Low SNR image: Spinning disk W1 unit at 60X magnification, Photometrics Prime 95b camera with low laser power and 30 ms exposure time.

High SNR image: Spinning disk W1 unit at 60X magnification, Photometrics Prime 95b camera with high laser power and 500 ms exposure time. Roper Scientific Live-SR module engaged to improve the resolution.

The first set of images exemplifies the use of a variant of the Generative Adversarial Network (GAN), called the pix2pix network, dedicated to studying the link between paired images for the purpose of image restoration and improvement. Both high and low signal-to-noise ratio (SNR) images were taken of the same sample on the same microscope. After training, the network was able to scan the low SNR image and produce a super-resolution prediction comparable to a high SNR image taken independently.

Fig 2

Sample: PFA-fixed monolayer of primary rat hepatocytes cultured on glass-bottom dishes coated with fibronectin. Cells are stained with Phalloidin-ATTO565.

Microscopy and imaging setup

Spinning disk W1 unit at 60X magnification, Photometrics Prime 95b camera. Image taken 3 microns above the coverslip to capture lumen formation between two hepatocytes.

The second set of images shows the use of an algorithm dedicated to classification and segmentation, based on the U-Net convolutional neural network approach. The Z-stack images show lumen formation between adjacent cells. The goal of the algorithm is to differentiate signals coming from a cell junction from those coming from the lumen, a distinction that cannot be made using classical thresholding methods with a single staining. To train the algorithm, de Mets first manually demarcated masks in ImageJ to differentiate nuclei (red), junctions (green), and lumen (blue) across 20 fields of view. Using these as targets, the network was able to predict masks with 94% accuracy after just 20 minutes of training. Moreover, while the network was trained on 2D data, it can also be used for 3D segmentation, for example to estimate nuclear or lumen volumes.
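As a rough illustration of this kind of setup (and not the facility’s actual pipeline), the PyTorch sketch below trains a toy U-Net to assign each pixel one of four hypothetical labels: background, nucleus, junction, or lumen. The image, mask, and network size are placeholders standing in for the annotated fields of view described above.

```python
# Toy U-Net (one downsampling level) for multi-class segmentation (sketch).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, n_classes=4):  # background, nucleus, junction, lumen
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, n_classes, 1))
    def forward(self, x):
        e = self.enc(x)                        # skip-connection source
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.dec(torch.cat([u, e], dim=1))  # concatenate skip features

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Placeholder field of view and hand-drawn mask (per-pixel labels 0-3),
# standing in for the ~20 annotated fields of view mentioned above.
image = torch.rand(1, 1, 64, 64)
mask = torch.randint(0, 4, (1, 64, 64))

for step in range(50):
    logits = model(image)            # shape: (1, 4, 64, 64)
    loss = ce(logits, mask)
    opt.zero_grad(); loss.backward(); opt.step()

pred = logits.argmax(dim=1)          # per-pixel class map
```

Because the network is fully convolutional, a model like this can be applied plane by plane to a Z-stack, which is one plausible way a 2D-trained network could support the 3D volume estimates described above.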

