

A Deep Learning Approach for Managing Medical Consumable Materials in Intensive Care Units via Convolutional Neural Networks: Technical Proof-of-Concept Study.

Author information

Peine Arne, Hallawa Ahmed, Schöffski Oliver, Dartmann Guido, Fazlic Lejla Begic, Schmeink Anke, Marx Gernot, Martin Lukas

Affiliations

Department of Intensive Care Medicine and Intermediate Care, University Hospital Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany.

Clinomic GmbH, Aachen, Germany.

Publication information

JMIR Med Inform. 2019 Oct 10;7(4):e14806. doi: 10.2196/14806.

Abstract

BACKGROUND

High numbers of consumable medical materials (eg, sterile needles and swabs) are used during the daily routine of intensive care units (ICUs) worldwide. Although medical consumables largely contribute to total ICU hospital expenditure, many hospitals do not track the individual use of materials. Current tracking solutions meeting the specific requirements of the medical environment, like barcodes or radio frequency identification, require specialized material preparation and high infrastructure investment. This impedes the accurate prediction of consumption, leads to high storage maintenance costs caused by large inventories, and hinders scientific work due to inaccurate documentation. Thus, new cost-effective and contactless methods for object detection are urgently needed.

OBJECTIVE

The goal of this work was to develop and evaluate a contactless visual recognition system for tracking medical consumable materials in ICUs using a deep learning approach on a distributed client-server architecture.
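As a rough illustration of the distributed client-server idea described above, the sketch below shows a hypothetical detection-unit client that classifies a camera frame locally and reports the top-1 result to a central server. The server URL, payload fields, model file, and labels file are illustrative assumptions, not the actual Consumabot interface.

```python
# Hypothetical sketch of the detection-unit side of a client-server setup:
# a single-board computer classifies a camera frame locally and reports the
# result to a central server. Endpoint, payload fields, and file names are
# illustrative assumptions, not the actual Consumabot protocol.
import json
import urllib.request

import numpy as np
import tensorflow as tf

SERVER_URL = "http://ward-server.local:8080/consumables"  # assumed endpoint
labels = [line.strip() for line in open("labels.txt")]    # 20 consumable classes (assumed file)

model = tf.keras.models.load_model("consumabot_mobilenet.h5")  # assumed trained model file

def classify_and_report(frame: np.ndarray) -> None:
    """Run top-1 classification on one camera frame and POST the result to the server."""
    # Assuming the model expects 224x224 inputs scaled to [0, 1].
    x = tf.image.resize(frame, (224, 224))[tf.newaxis, ...] / 255.0
    probs = model.predict(x, verbose=0)[0]
    top1 = int(np.argmax(probs))
    payload = {"item": labels[top1], "confidence": float(probs[top1])}
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)  # fire-and-forget report to the server
```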

METHODS

We developed Consumabot, a novel client-server optical recognition system for medical consumables, based on the convolutional neural network model MobileNet implemented in TensorFlow. The software was designed to run on single-board computer platforms as a detection unit. The system was trained to recognize 20 different materials in the ICU, with 100 sample images provided for each consumable material. We assessed the top-1 recognition rates in the context of different real-world ICU settings: materials presented to the system without visual obstruction, materials that were 50% covered, and scenarios with multiple items present. We further performed an analysis of variance with repeated measures to quantify the effect of adverse real-world circumstances.
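The following is a minimal transfer-learning sketch of the approach outlined above, assuming a Keras-style MobileNet backbone pretrained on ImageNet and a directory of roughly 100 images per consumable class. It illustrates the general technique rather than reproducing the authors' original TensorFlow training script.

```python
# Minimal transfer-learning sketch: frozen MobileNet feature extractor with a
# small trainable classification head for 20 consumable classes. The directory
# layout data/<class_name>/*.jpg is an assumed convention for illustration.
import tensorflow as tf

NUM_CLASSES = 20  # consumable materials, per the study design

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=32,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=32,
    validation_split=0.2, subset="validation", seed=42)

# MobileNet backbone pretrained on ImageNet, used as a frozen feature extractor.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```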

RESULTS

Consumabot reached >99% recognition reliability after about 60 training steps and 150 validation steps. A desirably low cross entropy of <0.03 was reached after about 100 iteration steps for the training set and after 170 steps for the validation set. In a real-world scenario, the system showed a high mean top-1 recognition accuracy of 0.85 (SD 0.11) for objects presented to the system without visual obstruction. Recognition accuracy was lower, but still acceptable, in scenarios where the objects were 50% covered (P<.001; mean recognition accuracy 0.71; SD 0.13) or multiple objects of the target group were present (P=.01; mean recognition accuracy 0.78; SD 0.11), compared to a nonobstructed view. The approach met the criteria of absence of explicit labeling (eg, barcodes, radio frequency labeling) while maintaining a high standard for quality and hygiene with minimal consumption of resources (eg, cost, time, training, and computational power).
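The snippet below sketches the kind of evaluation reported above: mean top-1 accuracy per presentation scenario followed by a repeated-measures ANOVA. The per-item accuracies are synthetic placeholders drawn around the reported scenario means, and the AnovaRM-based analysis is an assumed reimplementation, not the authors' original statistics code.

```python
# Sketch of the reported evaluation: per-item top-1 accuracy in three
# presentation scenarios, then a repeated-measures ANOVA on scenario.
# Accuracy values are synthetic placeholders for illustration only.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
# Placeholder per-item accuracies, roughly centred on the reported scenario means.
acc = np.concatenate([
    rng.normal(0.85, 0.11, 20),   # unobstructed view
    rng.normal(0.71, 0.13, 20),   # 50% covered
    rng.normal(0.78, 0.11, 20),   # multiple items present
]).clip(0, 1)

results = pd.DataFrame({
    "item":     [f"consumable_{i}" for i in range(20)] * 3,
    "scenario": ["unobstructed"] * 20 + ["half_covered"] * 20 + ["multiple_items"] * 20,
    "accuracy": acc,
})

# Mean top-1 accuracy per scenario (cf. 0.85 / 0.71 / 0.78 reported above).
print(results.groupby("scenario")["accuracy"].agg(["mean", "std"]))

# Repeated-measures ANOVA: does the presentation scenario affect accuracy?
anova = AnovaRM(results, depvar="accuracy", subject="item", within=["scenario"]).fit()
print(anova)
```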

CONCLUSIONS

Using a convolutional neural network architecture, Consumabot consistently achieved good results in the classification of consumables and is thus a feasible way to recognize and register medical consumables directly to a hospital's electronic health record. The system shows limitations when materials are partially covered such that identifying characteristics of the consumables are hidden from the system. Further development and assessment in different medical circumstances are needed.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/864b/6819012/0a48bab732c0/medinform_v7i4e14806_fig1.jpg
