Publication details
When Deep Learning Meets Cell Image Synthesis
Year of publication | 2020 |
---|---|
Type | Article in Periodical (without peer review) |
Description | Deep learning methods developed by the computer vision community are, with some delay, being successfully adapted for biomedical image analysis and synthesis. In cell image synthesis, too, deep learning has brought significant improvements in the quality of the generated results. The typical task is to generate isolated cell images from training examples in which individual cells have been cropped, centered, and aligned. While the first attempts to use generative adversarial networks (GANs) without any object detection or segmentation had limited capabilities, the recent article by Scalbert et al. [1] has shown that a significant improvement can be obtained by splitting the task into (1) learning and generating object (cell and/or nuclei) shapes based on image segmentation, and (2) learning and generating the texture separately for each segment type, including the background, using so-called style transfer. |
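The two-stage decomposition described above can be illustrated with a toy sketch. This is plain NumPy, not the actual GAN and style-transfer networks from the cited article; both function names and the shapes/textures are illustrative stand-ins chosen only to show how the pipeline separates "where the cells are" (a segmentation-style mask) from "what each segment looks like" (per-segment texture).

```python
import numpy as np

def generate_shape_mask(size=64, n_cells=3, rng=None):
    # Stage 1 stand-in: in the article this would be a generative model
    # trained on segmentation masks; here we simply place random ellipses
    # as cell shapes on an empty background.
    rng = rng or np.random.default_rng(0)
    yy, xx = np.mgrid[0:size, 0:size]
    mask = np.zeros((size, size), dtype=np.uint8)
    for _ in range(n_cells):
        cy, cx = rng.integers(10, size - 10, 2)   # ellipse center
        ry, rx = rng.integers(5, 12, 2)           # ellipse radii
        mask[((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1] = 1
    return mask

def texture_fill(mask, rng=None):
    # Stage 2 stand-in: the article synthesizes texture per segment type
    # (including background) via style transfer; here each label simply
    # gets its own noise statistics.
    rng = rng or np.random.default_rng(1)
    img = np.empty(mask.shape, dtype=float)
    img[mask == 0] = rng.normal(0.1, 0.02, (mask == 0).sum())  # background
    img[mask == 1] = rng.normal(0.7, 0.10, (mask == 1).sum())  # cell interior
    return np.clip(img, 0.0, 1.0)

mask = generate_shape_mask()
image = texture_fill(mask)
```

The point of the split is visible even in this toy version: the mask fixes the geometry independently of appearance, so the texture model only ever has to learn (or here, sample) intensity statistics conditioned on a segment label.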