Shi Bin, Patel Medhavi, Yu Dian, Yan Jihui, Li Zhengyu, Petriw David, Pruyn Thomas, Smyth Kelsey, Passeport Elodie, Miller R J Dwayne, Howe Jane Y
Department of Materials Science and Engineering, University of Toronto, ON M5S 3H5, Canada.
Department of Chemical Engineering and Applied Chemistry, University of Toronto, ON M5S 3E5, Canada.
Sci Total Environ. 2022 Jun 15;825:153903. doi: 10.1016/j.scitotenv.2022.153903. Epub 2022 Feb 19.
Quantifying and classifying microplastics are demanding tasks in monitoring microplastic pollution and evaluating its potential health risks. In this paper, microplastics of diverse chemical compositions and shapes, derived from everyday products, are imaged by scanning electron microscopy (SEM). SEM offers greater depth of field and finer detail over a wider magnification range than visible light microscopy or a digital camera, and permits further chemical composition analysis. However, manually extracting microplastics from micrographs is labour-intensive, especially for small particles and thin fibres. A deep learning approach facilitates microplastic quantification and classification using a manually annotated dataset of 237 micrographs containing microplastic particles (fragments or beads) in the range of 50 μm to 1 mm and fibres with diameters of around 10 μm. For quantification, two deep learning models (U-Net and MultiResUNet) were implemented for semantic segmentation. Both significantly outperformed conventional computer vision techniques, achieving average Jaccard indices above 0.75. In particular, U-Net was combined with object-aware pixel embedding to perform instance segmentation of densely packed and tangled fibres for further quantification. For shape classification, a fine-tuned VGG16 network classifies microplastics by shape with an accuracy of 98.33%. With trained models, a new micrograph can be segmented and classified with high accuracy in seconds, markedly cheaper and faster than manual labour. The growing dataset may benefit the identification and quantification of microplastics in environmental samples in future work.
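For readers implementing a similar evaluation, the Jaccard index cited above is the standard intersection-over-union between a predicted segmentation mask and its ground-truth annotation. The following is a minimal sketch in Python/NumPy assuming binary masks; the function name and the convention used for two empty masks are illustrative choices, not the authors' code.

import numpy as np

def jaccard_index(pred, target):
    # Jaccard index (intersection over union) of two binary masks.
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treated as perfect agreement (a convention)
    return intersection / union

# Averaging over a test set of micrographs; the paper reports averages above 0.75.
# scores = [jaccard_index(p, t) for p, t in zip(predicted_masks, ground_truth_masks)]
# mean_jaccard = float(np.mean(scores))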
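The shape classifier described in the abstract is a fine-tuned VGG16. A minimal fine-tuning sketch in PyTorch/torchvision is given below; the framework, the frozen convolutional backbone, and the three shape classes (fragment, bead, fibre) are assumptions for illustration, not details confirmed by the abstract.

import torch.nn as nn
from torchvision import models

num_classes = 3  # assumed shape classes: fragment, bead, fibre
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the pretrained convolutional backbone and fine-tune only the classifier head.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer (1000 ImageNet classes) with the shape classes.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

# Training then proceeds with a standard cross-entropy loss, e.g.:
# optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
# criterion = nn.CrossEntropyLoss()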