Kovatchev Boris P, Patek Stephen D, Ortiz Edward Andrew, Breton Marc D
University of Virginia Center for Diabetes Technology, Charlottesville, Virginia.
Diabetes Technol Ther. 2015 Mar;17(3):177-86. doi: 10.1089/dia.2014.0272. Epub 2014 Dec 1.
The level of continuous glucose monitoring (CGM) accuracy needed for insulin dosing using sensor values (i.e., the level of accuracy permitting non-adjunct CGM use) is a topic of ongoing debate. Assessment of this level in clinical experiments is virtually impossible because the magnitude of CGM errors cannot be manipulated and related prospectively to clinical outcomes.
A combination of archival data (parallel CGM, insulin pump, self-monitoring of blood glucose [SMBG] records, and meals for 56 pump users with type 1 diabetes) and in silico experiments was used to "replay" real-life treatment scenarios and relate sensor error to glycemic outcomes. Nominal blood glucose (BG) traces were extracted using a mathematical model, yielding 2,082 BG segments, each initiated by an insulin bolus and confirmed by SMBG. These segments were replayed at seven sensor accuracy levels (mean absolute relative differences [MARDs] of 3-22%) testing six scenarios: insulin dosing using sensor values, threshold alarms, and predictive alarms, each with and without CGM trend arrows.
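The core of such a replay experiment is corrupting a nominal BG trace with sensor error whose expected MARD matches a chosen level. Below is a minimal sketch of that step in Python, assuming a zero-mean multiplicative Gaussian error model (the abstract does not specify the error model actually used; the function names are illustrative, not the authors'). For Gaussian noise, E|e| = σ·√(2/π), so σ is set to mard·√(π/2) to hit the target MARD in expectation.

```python
import math
import random

def add_sensor_error(bg_trace, mard, rng):
    """Corrupt a nominal BG trace (mg/dL) with multiplicative Gaussian error.

    sigma is chosen so the expected mean absolute relative difference of
    the simulated sensor readings equals `mard` (e.g., 0.10 for 10%).
    """
    sigma = mard * math.sqrt(math.pi / 2)
    return [bg * (1 + rng.gauss(0, sigma)) for bg in bg_trace]

def mard_of(true_trace, sensor_trace):
    """Empirical MARD between a reference trace and sensor readings."""
    return sum(abs(s - t) / t
               for t, s in zip(true_trace, sensor_trace)) / len(true_trace)

# Replay example: a flat nominal trace at 120 mg/dL, target MARD of 10%.
rng = random.Random(0)
true_bg = [120.0] * 20000
sensor_bg = add_sensor_error(true_bg, 0.10, rng)
print(round(mard_of(true_bg, sensor_bg), 3))  # close to 0.10
```

Replaying the same segment at several `mard` values (3-22%, as in the study) and feeding the corrupted readings into a dosing rule is what lets sensor error be related to simulated glycemic outcomes.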
In all six scenarios, the occurrence of hypoglycemia (frequency of BG levels ≤50 mg/dL and of BG levels ≤39 mg/dL) increased with sensor error, displaying an abrupt slope change at MARD=10%. Similarly, hyperglycemia (frequency of BG levels ≥250 mg/dL and of BG levels ≥400 mg/dL) increased and displayed an abrupt slope change at MARD=10%. When added to insulin dosing decisions, information from CGM trend arrows, threshold alarms, and predictive alarms improved average glycemia by 1.86, 8.17, and 8.88 mg/dL, respectively.
Using CGM for insulin dosing decisions is feasible below a certain level of sensor error, estimated in silico at MARD=10%. In our experiments, further accuracy improvement did not contribute substantively to better glycemic outcomes.