Background: Accurate measurement of serum creatinine (SCr) is critical for estimating the glomerular filtration rate (GFR) and classifying kidney function. This study evaluated the analytical differences between the enzymatic and Jaffe methods for SCr measurement and their impact on estimated GFR (eGFR) using two widely applied equations: CKD-EPI and EKFC.
Methods: The study included 427 patients over 40 years old. SCr was measured using both enzymatic and Jaffe methods on the Alinity c platform. eGFR was calculated with the CKD-EPI (2009) and EKFC equations. Agreement between methods was assessed using Bland-Altman and Passing-Bablok regression. eGFR differences were analyzed using the Wilcoxon signed-rank test and multiple linear regression. Agreement in GFR category classification was evaluated using weighted kappa and Kendall’s tau.
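As a rough illustration of the pipeline described above, the Python sketch below computes eGFR with the CKD-EPI (2009) equation (race coefficient omitted), then derives the Bland-Altman bias and limits of agreement and a quadratic-weighted kappa on GFR categories. The data are simulated and all variable names are hypothetical; this is not the study's code.

```python
# Minimal sketch, assuming paired SCr measurements per patient (mg/dL).
import numpy as np
from sklearn.metrics import cohen_kappa_score

def ckd_epi_2009(scr, age, female):
    """CKD-EPI 2009 eGFR in mL/min/1.73 m^2 (race coefficient omitted)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr / kappa, 1.0) ** alpha
            * max(scr / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr

def gfr_category(egfr):
    """KDIGO categories G1..G5 coded 0..4 (G1 >=90 ... G5 <15)."""
    return sum(egfr < cut for cut in (90, 60, 30, 15))

rng = np.random.default_rng(0)
n = 200
age = rng.integers(41, 90, n)
female = rng.integers(0, 2, n).astype(bool)
scr_enz = rng.uniform(0.6, 3.0, n)                # enzymatic SCr, simulated
scr_jaffe = scr_enz + rng.normal(0.05, 0.08, n)   # Jaffe SCr with a small bias

# Bland-Altman: mean difference (systematic bias) and 95% limits of agreement.
diff = scr_jaffe - scr_enz
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias = {bias:.3f} mg/dL, LoA = ({bias - 1.96*sd:.3f}, {bias + 1.96*sd:.3f})")

# Quadratic-weighted kappa for agreement in GFR-category assignment.
cats_enz = [gfr_category(ckd_epi_2009(s, a, f)) for s, a, f in zip(scr_enz, age, female)]
cats_jaffe = [gfr_category(ckd_epi_2009(s, a, f)) for s, a, f in zip(scr_jaffe, age, female)]
print("weighted kappa:", cohen_kappa_score(cats_enz, cats_jaffe, weights="quadratic"))
```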
Results: While the mean difference between methods was small, both systematic and proportional biases were statistically significant. eGFR values differed significantly between methods in both sexes (p < 0.01), regardless of the equation used. ΔeGFR was significantly associated with SCr values, but not with age. Although overall agreement in GFR categories was high (kappa > 0.91), method-dependent reclassification of patients was observed, which may influence CKD diagnosis and clinical decision-making.
Conclusions: Even minor analytical differences between enzymatic and Jaffe SCr measurements can lead to clinically relevant discrepancies in GFR categorization. These findings highlight the need for harmonization in laboratory methods to ensure consistent reporting and patient management.
Prognosis prediction by urinary liver-type fatty acid-binding protein in patients in the intensive care unit admitted from the emergency department: A single-center, historical cohort study
Introduction: Early risk stratification of critically ill patients is essential for optimizing intensive care unit (ICU) resource allocation and treatment decisions. Urinary liver-type fatty acid-binding protein (L-FABP) is a simple, noninvasive biomarker that may provide real-time information on organ dysfunction. However, its prognostic utility in patients admitted to the ICU from the emergency department remains unclear.
Aim of the study: The aim of this study was to evaluate the prognostic value of L-FABP levels measured shortly after ICU admission in predicting 28-day mortality among patients admitted from the emergency department.
Methods: This single-center retrospective observational study included patients admitted to the ICU between December 2020 and August 2022. Urinary L-FABP concentrations were measured at ICU admission (T0) and 3 hours later (T3). The primary outcome was 28-day in-hospital mortality. Prognostic performance was assessed using receiver operating characteristic curves and Cox proportional hazards models with inverse probability of treatment weighting. Results were compared with Acute Physiology and Chronic Health Evaluation (APACHE) II, Sequential Organ Failure Assessment (SOFA) scores, and lactate levels.
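A minimal sketch of the two analysis steps named above, assuming a tidy patient-level table; the file and column names (lfabp_t3, apache2, sofa, lactate, iptw, death28, time) are hypothetical, not the study's data dictionary.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score
from lifelines import CoxPHFitter

df = pd.read_csv("icu_cohort.csv")  # hypothetical patient-level table

# Discrimination of each candidate predictor for 28-day mortality.
for marker in ["lfabp_t3", "apache2", "sofa", "lactate"]:
    print(marker, round(roc_auc_score(df["death28"], df[marker]), 3))

# Cox proportional hazards with inverse-probability-of-treatment weights;
# robust=True requests a sandwich variance, appropriate for weighted data.
cph = CoxPHFitter()
cph.fit(df[["time", "death28", "lfabp_t3_high", "iptw"]],
        duration_col="time", event_col="death28",
        weights_col="iptw", robust=True)
cph.print_summary()  # hazard ratio and 95% CI for the high-L-FABP flag
```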
Results: Data from 118 patients were included in the final analysis. Urinary L-FABP levels at T3 showed the highest area under the curve (AUC) for predicting 28-day mortality (AUC = 0.873), compared with APACHE II (AUC = 0.801), SOFA (AUC = 0.753), and lactate (AUC = 0.734). An elevated L-FABP level at T3 was independently associated with increased mortality (hazard ratio [HR] = 8.60, 95% confidence interval [CI]: 1.02–72.64, P = 0.047). The T3/T0 ratio showed only modest predictive value (AUC = 0.623).
Conclusions: Urinary L-FABP measured 3 hours after ICU admission was an independent predictor of short-term mortality. The marker’s simplicity and bedside applicability suggest its potential utility not only in ICUs but also in emergency departments and triage decision-making.
Venoarterial extracorporeal membrane oxygenation as bridge support for refractory catecholamine-resistant shock and severe lactic acidosis in a patient with metformin exposure and multifactorial contributors: A case report
A 47-year-old male with type 2 diabetes on metformin and hypertension presented with profound hypoxemia, severe metabolic acidosis (pH unrecordable, lactate 17 mmol/L), and progressive cardiac dysfunction in the setting of presumed sepsis. Despite maximal conventional therapy—including mechanical ventilation, broad-spectrum antimicrobials, and high-dose vasopressors—the patient developed refractory shock and multi-organ dysfunction. Venoarterial extracorporeal membrane oxygenation (VA-ECMO) was initiated on hospital day 2 as hemodynamic bridge support, combined with continuous renal replacement therapy (CRRT). This intervention facilitated stabilization of hemodynamics, correction of acidosis, and improvement in organ function. The patient was successfully decannulated and survived to discharge, though with residual cardiomyopathy. Lactic acidosis in this case was likely multifactorial, with metformin exposure as one potential contributor amid acute kidney injury, hypoperfusion, and possible septic elements. This report describes the use of VA-ECMO as supportive therapy in a complex, refractory critical illness scenario, highlighting the importance of timely multidisciplinary escalation while emphasizing diagnostic challenges in attributing causality and the need for cautious patient selection in such high-risk interventions.
Predictive ability of malnutrition screening tools in enterally fed, mechanically ventilated patients with phase angle inference: A prospective observational study
Background: The prognostic ability of malnutrition assessment tools in the critically ill remains controversial. This study assessed how well the Malnutrition Universal Screening Tool (MUST), Nutritional Risk Screening 2002 (NRS-2002), and Nutrition Risk in the Critically Ill (NUTRIC) scores predict malnutrition risk in enterally fed, mechanically ventilated intensive care patients.
Methods: In a multicenter, prospective, observational study, patients from five ICUs in Jordan were assessed at two stages. During the first 24 hours of admission, MUST, NUTRIC, and NRS-2002 scores were obtained, along with demographic and admission characteristics. In the assessment stage, from day 6 of admission onward, bioelectrical impedance analysis (BIA) measures, including body composition and phase angle (PhA), were obtained. Machine learning (ML), structural equation modeling (SEM), and area under the curve (AUC) analyses were used to estimate malnutrition.
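As an illustration of evaluating a screening tool against a PhA-derived reference, the sketch below computes sensitivity, specificity, and AUC for a binary high-risk flag; the data are simulated and the score cutoff is an arbitrary assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
n = 300
pha_malnourished = rng.integers(0, 2, n)                      # PhA-based reference
must_score = pha_malnourished * 1.2 + rng.normal(0, 0.8, n)   # illustrative score
must_high = (must_score > 0.6).astype(int)                    # arbitrary cutoff

tn, fp, fn, tp = confusion_matrix(pha_malnourished, must_high).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(pha_malnourished, must_score))
```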
Results: A total of 709 patients were observed. At admission, NUTRIC, MUST, and NRS-2002 were congruent in identifying high malnutrition risk (45.1%, 46.4%, and 53.6%, respectively). With PhA as the reference, MUST and NRS-2002 scored higher for sensitivity (77.6% and 76.8%, respectively) and specificity (93.3% and 90.7%, respectively). Both showed acceptable correlation estimates by SEM (0.65 and 0.70, respectively) and ML (0.90 and 0.91, respectively). Further, MUST discriminated malnutrition best, followed by NRS-2002 and NUTRIC (AUC = 0.76, 0.64, and 0.53, respectively).
Conclusions: Alongside validated BIA technology, MUST and NRS-2002 functioned as reliable prognostic indicators of malnutrition risk in the ICU.
High-fidelity simulation programs in ICU-related ethical non-technical skills training: A narrative review
Objective: Is there a place for non-technical skills training in the ICU, and what teaching strategy should we implement? This narrative review analyzes the benefits of teaching ethics in the ICU environment through high-fidelity simulation scenarios modeled on real-life situations, aimed at improving communication, moral reasoning, self-reliance, cooperation, and perceptual skills.
Methods: Few publications address the training of ICU residents in non-technical skills and ethical dilemmas using high-fidelity simulation. After a scoping search of the literature, we identified eight publications relevant to this narrative review.
Results: In the reviewed studies, the main topics discussed and rehearsed using simulations were as follows: communicating an adverse event during anesthesia in one study [5]; delivering bad news in two studies [14,18]; the ethics of end-of-life care and the do-not-resuscitate order in three studies [5,18,19]; and ethical non-technical skills such as communication, teamwork, and confidence in emergent real-life situations in four studies [15,16,17,20].
Conclusions: Developing a more structured approach to teaching ethics-related events is important, particularly in critical care settings. All reviewed studies reached the same conclusion: high-fidelity simulation training is an educational strategy through which ICU residents can build a foundation in ethical considerations and moral reasoning, improving ethical non-technical skills such as confidence, communication, teamwork, delivering bad news, and end-of-life care.
Percutaneous tracheostomy in Costa Rican intensive care units: Multicenter epidemiology, practices, and immediate complications
Introduction: This study describes the epidemiological and clinical profile of ICU patients undergoing percutaneous tracheostomy in Costa Rica and identifies predictors of acute complications, addressing ongoing debates on timing, technique, and risk stratification.
Methods: We performed a prospective multicenter cohort study in eight hospitals of the Caja Costarricense de Seguro Social (CCSS) (2019–2022), including adult ICU patients undergoing percutaneous tracheostomy. Demographic, clinical, and procedural data were collected, and multivariable logistic regression was used to identify predictors of complications.
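For readers unfamiliar with how such predictors are typically derived, a minimal sketch of a multivariable logistic model with odds ratios follows; the dataset and column names are hypothetical, with the covariate list taken from the Results below.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tracheostomy_cohort.csv")  # hypothetical dataset

# Binary outcome: any acute complication; covariates mirror the Results below.
model = smf.logit(
    "complication ~ anticoagulation + obesity + coagulopathy"
    " + prior_neck_surgery + cervical_immobilization + technical_difficulty",
    data=df,
).fit()

# Odds ratios and 95% CIs come from exponentiating the coefficients.
ci = model.conf_int()
odds = pd.DataFrame({"OR": np.exp(model.params),
                     "CI_low": np.exp(ci[0]),
                     "CI_high": np.exp(ci[1])})
print(odds.round(2))
```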
Results: A total of 516 patients were analyzed (mean age 53.2 ± 16.3 years; 68.2% male). The main indications were anticipated prolonged ventilation (32.4%), neurological deficits (26.7%), and ventilation >10 days (21.8%). The Ciaglia and Griggs techniques were used in 51.0% and 48.3% of cases, respectively. Capnography was applied in 74.2%, ultrasound in 17.7%, and bronchoscopy in 3.1%. First-pass success was achieved in 85.1%. Acute complications occurred in 28.3% of patients, predominantly minor bleeding (25.4%), while serious complications (airway loss, false passage, or bleeding requiring surgery) were rare (3.9%). No procedure-related deaths were observed. Independent predictors of complications included anticoagulation (OR 2.82), obesity (OR 2.10), coagulopathy (OR 2.29), prior neck surgery (OR 3.49), cervical immobilization (OR 4.68), and technical difficulty (OR 4.15 for any complication; OR 2.00 for serious complications). Airway management by physicians, compared with respiratory therapists, was also associated with higher risk (OR 1.52).
Conclusions: Percutaneous tracheostomy was feasible in multiple ICUs of the CCSS with complication rates comparable to international cohorts. Risk factors for complications included anticoagulation and prior neck surgery. Wider adoption of adjunctive monitoring tools and structured multidisciplinary training may further enhance procedural safety. These findings should be interpreted in the context of an observational design and a broad definition of complications.
Comparing length of stay between adult patients admitted to the intensive care unit with alcohol withdrawal syndrome treated with phenobarbital versus lorazepam
Objective: To review the use of phenobarbital and lorazepam for treating alcohol withdrawal syndrome (AWS) in the ICU, compare length of stay (LOS), examine medication use trends, implement provider training, and evaluate post-training outcomes.
Methods: Design – retrospective observational study with a quality improvement intervention. Setting – tertiary care hospital with 36 ICU beds. Patients – adults admitted to the ICU and placed on the Clinical Institute Withdrawal Assessment (CIWA) protocol. Patients with epilepsy were excluded.
Results: During the 34-month baseline period, 713 patients with alcohol use disorder (AUD) on the CIWA protocol, without epilepsy, were admitted to the ICU. Of these, 189 were treated with phenobarbital, 460 received only lorazepam, and 64 received neither medication. All but 2 of the patients who received phenobarbital also received lorazepam. Compared with phenobarbital-treated patients, lorazepam-only patients had a shorter ICU LOS (95% CI: −2.36 to −1.32; p < 0.001) but higher mortality (13.91% vs. 4.76%, p = 0.0008). We then developed and delivered a training (referred to here as the “intervention”) to all ICU providers encouraging consistent use of phenobarbital in the ICU when appropriate. During the 3-month post-intervention period, 44 patients with AUD on the CIWA protocol were admitted to the ICU: 26 received phenobarbital, 12 received only lorazepam, and 6 received neither medication. Significantly more patients were treated with only phenobarbital (57.69%) compared with baseline (1.06%). Compared with patients treated with phenobarbital, patients treated with only lorazepam again had a significantly higher mortality rate (33.33% vs. 7.69%, p = 0.04).
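As an aside on the post-intervention mortality comparison: the reported percentages imply 4/12 deaths in the lorazepam-only group and 2/26 in the phenobarbital group. The abstract does not state which test produced p = 0.04, so the Fisher's exact test below is only an illustrative way to run such a 2×2 comparison, not necessarily the study's method.

```python
from scipy.stats import fisher_exact

# Rows: [deaths, survivors]; counts reconstructed from the percentages above
# (33.33% of 12 lorazepam-only patients; 7.69% of 26 phenobarbital patients).
table = [[4, 8],    # lorazepam only
         [2, 24]]   # phenobarbital
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```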
Conclusions: We found significant variability in the use of phenobarbital and lorazepam for the treatment of AWS in the ICU. After a quality improvement training for ICU physicians, there was less frequent concurrent use of benzodiazepines and barbiturates, no difference in ICU or hospital LOS, and a significantly lower mortality rate for those treated with phenobarbital.
Decisions, outcomes, and learning from what didn’t go wrong
We learn from outcomes, yet outcomes are unreliable teachers. In critical care, decisions are made with incomplete information, under time pressure, within systems that normalize workarounds, and where causality is opaque [1]. The feedback we receive later, whether a patient survives or dies or the extent of their recovery, reflects more than the decision itself. Physiology, system redundancies, timely intervention by a colleague, stochastic variance: all shape outcomes independently of our reasoning [2]. If we treat outcomes as verdicts on decision quality, we will systematically mislearn.
Influence of different short peripheral cannula materials on the incidence of phlebitis in intensive care units: A post-hoc analysis of the AMOR-VENUS study
Aim of the study: Short peripheral cannula (SPC)-related phlebitis occurs in 7.5% of critically ill patients, and mechanical irritation from cannula materials is a risk factor. Softer polyurethane cannulas reportedly reduce phlebitis, but incidence may vary with the type of polyurethane, and differences in cannula stiffness may also play a role; these relationships are not well understood. This study analyzed intensive care unit (ICU) patient data to compare the incidence of phlebitis across cannula products, focusing on polyurethane.
Material and Methods: This is a post-hoc analysis of the AMOR-VENUS study, which involved 23 ICUs in Japan. We included patients aged ≥18 years who were admitted to the ICU with SPCs. The primary outcome was phlebitis, evaluated using hazard ratios (HRs) and 95% confidence intervals (CIs). Based on market share and differences in synthesis, polyurethanes were categorized as PEU-Vialon® (BD, USA), SuperCath® (Medikit, Japan), or other polyurethanes; non-polyurethane materials were also analyzed. Multivariable marginal Cox regression analysis was performed with other polyurethanes as the reference.
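Because several cannulas can belong to one patient, the marginal Cox model needs cluster-robust variance. A minimal lifelines sketch follows; the file and column names are hypothetical, not taken from the study.

```python
import pandas as pd
from lifelines import CoxPHFitter

spc = pd.read_csv("spc_level_data.csv")  # one row per cannula (hypothetical)

# Material dummies are coded against the reference "other polyurethanes";
# cluster_col ties cannulas from the same patient together and forces a
# cluster-robust (sandwich) variance estimate.
cph = CoxPHFitter()
cph.fit(spc[["days_to_phlebitis", "phlebitis", "peu_vialon", "supercath",
             "etfe", "other_material", "patient_id"]],
        duration_col="days_to_phlebitis", event_col="phlebitis",
        cluster_col="patient_id")
cph.print_summary()  # HRs with cluster-robust 95% CIs
```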
Results: In total, 1,355 patients and 3,429 SPCs were evaluated. Among polyurethane cannulas, 1,087 (33.5%) were PEU-Vialon®, 702 (21.6%) were SuperCath®, and 276 (8.5%) were other polyurethanes. Among non-polyurethane cannulas, 1,292 (39.8%) were ethylene tetrafluoroethylene (ETFE) cannulas, and 72 (2.2%) used other materials. The highest incidence of phlebitis was observed with SuperCath® (13.1%). Multivariable analysis revealed an HR of 1.45 (95% CI: 0.75–2.80, p = 0.21) for PEU-Vialon®, 2.60 (95% CI: 1.35–5.00, p < 0.01) for SuperCath®, 2.29 (95% CI: 1.19–4.42, p = 0.01) for ETFE, and 2.20 (95% CI: 0.46–10.59, p = 0.32) for other materials.
Conclusions: The incidence of phlebitis varied among polyurethane cannulas. Further research is warranted to determine the causes of these differences.
Prediction of acute kidney injury in mechanically ventilated patients with COVID-19-related septic shock: An exploratory analysis of non-renal organ dysfunction markers
Background: Acute kidney injury (AKI) is a common and serious complication in critically ill patients with non-kidney organ dysfunction. Early prediction of AKI is crucial for timely intervention and improved outcomes. This study aimed to identify readily available non-renal predictors of AKI and to develop an exploratory prediction model in a specific cohort of critically ill patients with COVID-19-related septic shock requiring mechanical ventilation.
Materials and methods: This was a single-center, observational, retrospective cohort study conducted in the respiratory ICU of Hospital H+ Querétaro between April and December 2020. The study included 42 mechanically ventilated patients with septic shock secondary to SARS-CoV-2 infection and non-kidney organ dysfunction. AKI was defined using the KDIGO criteria. Trend analysis and bivariate and multivariate linear regression were used to identify predictors of AKI and severe AKI.
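For orientation, a simplified creatinine-only helper for KDIGO staging is sketched below; the full criteria also use urine output and initiation of renal replacement therapy, which this sketch omits.

```python
def kdigo_stage(scr, baseline_scr, rise_48h):
    """Creatinine-only KDIGO stage: 0 = no AKI, 1-3 = AKI stage (SCr in mg/dL)."""
    ratio = scr / baseline_scr
    if ratio >= 3.0 or scr >= 4.0:
        return 3                      # stage 3 also applies if RRT is started (omitted here)
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or rise_48h >= 0.3:
        return 1                      # rise_48h: absolute SCr increase within 48 h
    return 0

print(kdigo_stage(scr=2.1, baseline_scr=1.0, rise_48h=0.4))  # -> 2
```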
Results: AKI occurred in 23 patients (54.8%), with 6 (14.3%) developing severe AKI. Trend analysis revealed differences in norepinephrine dose, hemoglobin, and lactate trajectories between groups. A simplified logistic regression model, internally validated with bootstrapping to guard against overfitting, identified a protective trend associated with higher hemoglobin levels on admission. Quantitative analysis of a forecasting model for daily renal function showed moderate predictive accuracy.
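The bootstrap internal validation mentioned above can be approximated with an optimism-correction loop like the following sketch; the cohort file, column names, and single-predictor model are assumptions for illustration, not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("aki_cohort.csv")  # hypothetical 42-patient cohort
fit = smf.logit("aki ~ hemoglobin", data=df).fit(disp=0)
apparent_auc = roc_auc_score(df["aki"], fit.predict(df))

# Harrell-style optimism correction: refit on each bootstrap resample and
# compare its AUC on the resample vs. on the original data.
optimism = []
for i in range(500):
    boot = df.sample(len(df), replace=True, random_state=i)
    bfit = smf.logit("aki ~ hemoglobin", data=boot).fit(disp=0)
    optimism.append(roc_auc_score(boot["aki"], bfit.predict(boot))
                    - roc_auc_score(df["aki"], bfit.predict(df)))

print("optimism-corrected AUC:", apparent_auc - np.mean(optimism))
```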
Conclusions: This study identified several readily available non-kidney organ dysfunction variables that can predict AKI and its severity in critically ill patients with COVID-19-related septic shock. These findings may help in the early identification of at-risk patients and facilitate timely interventions to potentially improve outcomes. Further validation in larger and more diverse populations is warranted.