Background: Accurate measurement of serum creatinine (SCr) is critical for estimating glomerular filtration rate (GFR) and classifying kidney function. This study evaluated the analytical differences between the enzymatic and Jaffe methods for SCr measurement and their impact on estimated GFR (eGFR) calculated with two widely applied equations: CKD-EPI and EKFC.
Methods: The study included 427 patients over 40 years old. SCr was measured using both enzymatic and Jaffe methods on the Alinity c platform. eGFR was calculated with the CKD-EPI (2009) and EKFC equations. Agreement between methods was assessed using Bland-Altman analysis and Passing-Bablok regression. eGFR differences were analyzed using the Wilcoxon signed-rank test and multiple linear regression. Agreement in GFR category classification was evaluated using weighted kappa and Kendall’s tau.
Results: While the mean difference between methods was small, both systematic and proportional biases were statistically significant. eGFR values differed significantly between methods in both sexes (p < 0.01), regardless of the equation used. ΔeGFR was significantly associated with SCr values, but not with age. Although overall agreement in GFR categories was high (kappa > 0.91), method-dependent reclassification of patients was observed, which may influence CKD diagnosis and clinical decision-making.
Conclusions: Even minor analytical differences between enzymatic and Jaffe SCr measurements can lead to clinically relevant discrepancies in GFR categorization. These findings highlight the need for harmonization in laboratory methods to ensure consistent reporting and patient management.
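The Bland-Altman analysis used above reduces to two quantities: the mean of the paired differences (the bias) and the 95% limits of agreement (bias ± 1.96 × SD of the differences). A minimal sketch follows; the paired creatinine values are synthetic and purely illustrative, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired
    measurement methods (Bland-Altman)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    # 95% limits of agreement: bias +/- 1.96 * SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired serum creatinine values (mg/dL)
enzymatic = [0.80, 1.10, 1.45, 2.00, 0.95, 1.60]
jaffe     = [0.86, 1.18, 1.50, 2.10, 1.04, 1.65]

bias, lo, hi = bland_altman(enzymatic, jaffe)
# bias ≈ -0.072 mg/dL: the Jaffe method reads slightly higher here
```

A small bias with narrow limits of agreement can still shift patients across a GFR category boundary when the eGFR equation is applied, which is the reclassification effect the abstract describes.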
Prognosis prediction by urinary liver-type fatty acid-binding protein in patients in the intensive care unit admitted from the emergency department: A single-center, historical cohort study
Introduction: Early risk stratification of critically ill patients is essential for optimizing intensive care unit (ICU) resource allocation and treatment decisions. Urinary liver-type fatty acid-binding protein (L-FABP) is a simple, noninvasive biomarker that may provide real-time information on organ dysfunction. However, its prognostic utility in patients admitted to the ICU from the emergency department remains unclear.
Aim of the study: The aim of this study was to evaluate the prognostic value of L-FABP levels measured shortly after ICU admission in predicting 28-day mortality among patients admitted from the emergency department.
Methods: This single-center retrospective observational study included patients admitted to the ICU between December 2020 and August 2022. Urinary L-FABP concentrations were measured at ICU admission (T0) and 3 hours later (T3). The primary outcome was 28-day in-hospital mortality. Prognostic performance was assessed using receiver operating characteristic curves and Cox proportional hazards models with inverse probability of treatment weighting. Results were compared with Acute Physiology and Chronic Health Evaluation (APACHE) II, Sequential Organ Failure Assessment (SOFA) scores, and lactate levels.
Results: Data from 118 patients were included in the final analysis. Urinary L-FABP levels at T3 showed the highest area under the curve (AUC) for predicting 28-day mortality (AUC = 0.873), compared with APACHE II (AUC = 0.801), SOFA (AUC = 0.753), and the lactate level (AUC = 0.734). An elevated L-FABP (T3) level was independently associated with increased mortality (hazard ratio [HR] = 8.60, 95% confidence interval [CI]: 1.02–72.64, P = 0.047). The T3/T0 ratio showed only modest predictive value (AUC = 0.623).
Conclusions: Urinary L-FABP levels measured 3 hours after ICU admission were an independent predictor of short-term mortality. The marker’s simplicity and bedside applicability suggest its potential utility not only in ICUs but also in emergency departments and triage decision-making.
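The AUC values reported above have a rank-based interpretation: the probability that a randomly chosen non-survivor has a higher biomarker value than a randomly chosen survivor (the Mann-Whitney statistic). A minimal pure-Python sketch, using hypothetical L-FABP values rather than the study's data:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive case scores higher
    (ties count as half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical urinary L-FABP concentrations: non-survivors vs. survivors
died     = [310.0, 190.0, 420.0, 150.0]
survived = [40.0, 95.0, 160.0, 30.0, 85.0]
print(auc(died, survived))  # 0.95: strong but imperfect discrimination
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why the T3/T0 ratio's 0.623 is described as only modest.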
Predictive ability of malnutrition screening tools in enterally fed, mechanically ventilated patients with phase angle inference: A prospective observational study
Background: The prognostic abilities of malnutrition assessment tools for the critically ill remain controversial. This study aimed to assess the ability of the MUST, NRS-2002, and NUTRIC tools to predict malnutrition risk in enterally fed, mechanically ventilated patients in intensive care.
Methods: In a multicenter, prospective, observational study, patients from five ICUs in Jordan were observed at two stages. During the first 24 hours of admission, MUST, NUTRIC, and NRS-2002 scores were obtained in addition to demographic and admission characteristics. In the assessment stage, from day 6 of admission onward, bioelectrical impedance analysis (BIA), including body composition and phase angle (PhA), was performed. Machine learning (ML), structural equation modeling (SEM), and area under the curve (AUC) analyses were used to derive malnutrition estimates.
Results: A total of 709 patients were observed. At admission, NUTRIC, MUST, and NRS-2002 were congruent in identifying high malnutrition risk (45.1%, 46.4%, and 53.6%, respectively). In reference to PhA, MUST and NRS-2002 scored higher for sensitivity (77.6% and 76.8%, respectively) and specificity (93.3% and 90.7%, respectively). Both showed acceptable correlation estimates by SEM (0.65 and 0.70, respectively) and ML (0.90 and 0.91, respectively). Furthermore, MUST discriminated malnutrition best, followed by NRS-2002 and NUTRIC (AUC = 0.76, 0.64, and 0.53, respectively).
Conclusions: Alongside validated BIA technology, MUST and NRS-2002 functioned as reliable prognostic indicators of malnutrition risk in the ICU.
Percutaneous tracheostomy in Costa Rican intensive care units: Multicenter epidemiology, practices, and immediate complications
Introduction: This study describes the epidemiological and clinical profile of ICU patients undergoing percutaneous tracheostomy in Costa Rica and identifies predictors of acute complications, addressing ongoing debates on timing, technique, and risk stratification.
Methods: We performed a prospective multicenter cohort study in eight CCSS hospitals (2019–2022), including adult ICU patients who underwent percutaneous tracheostomy. Demographic, clinical, and procedural data were collected, and multivariable logistic regression was used to identify predictors of complications.
Results: A total of 516 patients were analyzed (mean age 53.2 ± 16.3 years; 68.2% male). The main indications were anticipated prolonged ventilation (32.4%), neurological deficits (26.7%), and ventilation >10 days (21.8%). The Ciaglia and Griggs techniques were used in 51.0% and 48.3% of cases, respectively. Capnography was applied in 74.2%, ultrasound in 17.7%, and bronchoscopy in 3.1%. First-pass success was achieved in 85.1%. Acute complications occurred in 28.3% of patients, predominantly minor bleeding (25.4%), while serious complications (airway loss, false passage, or bleeding requiring surgery) were rare (3.9%). No procedure-related deaths were observed. Independent predictors of complications included anticoagulation (OR 2.82), obesity (OR 2.10), coagulopathy (OR 2.29), prior neck surgery (OR 3.49), cervical immobilization (OR 4.68), and technical difficulty (OR 4.15 for any complication; OR 2.00 for serious complications). Airway management by physicians, compared with respiratory therapists, was also associated with higher risk (OR 1.52).
Conclusions: Percutaneous tracheostomy was feasible in multiple ICUs of the CCSS with complication rates comparable to international cohorts. Risk factors for complications included anticoagulation and prior neck surgery. Wider adoption of adjunctive monitoring tools and structured multidisciplinary training may further enhance procedural safety. These findings should be interpreted in the context of an observational design and a broad definition of complications.
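The odds ratios reported above come from multivariable logistic regression. For intuition, the unadjusted odds ratio and its 95% CI can be computed directly from a 2×2 table (the Woolf/log method); the counts below are hypothetical, not taken from the study:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% CI from a 2x2 table:
    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event.
    CI uses the log-odds standard error (Woolf method)."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: complications among anticoagulated vs. non-anticoagulated patients
orr, lo, hi = odds_ratio_ci(20, 40, 60, 340)
```

A multivariable model adjusts such crude ratios for confounders (e.g., obesity or coagulopathy co-occurring with anticoagulation), which is why the study's adjusted ORs are the clinically meaningful estimates.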
Comparing length of stay between adult patients admitted to the intensive care unit with alcohol withdrawal syndrome treated with phenobarbital versus lorazepam
Objective: To review the use of phenobarbital and lorazepam for treating alcohol withdrawal syndrome (AWS) in the ICU, compare length of stay, examine medication use trends, implement provider training, and evaluate outcomes post-training.
Methods: Design – retrospective observational study with a quality improvement component. Setting – tertiary care hospital with 36 ICU beds. Patients – adults admitted to the ICU and placed on the Clinical Institute Withdrawal Assessment (CIWA) protocol. Patients with epilepsy were excluded.
Results: During the 34-month baseline period, 713 patients with alcohol use disorder (AUD) on the CIWA protocol, without epilepsy, were admitted to the ICU: 189 were treated with phenobarbital, 460 received only lorazepam, and 64 received neither medication. All but 2 of the patients who received phenobarbital also received lorazepam. Compared with phenobarbital-treated patients, lorazepam-only patients had a shorter ICU LOS (p < 0.001, 95% CI -2.36 to -1.32) but higher mortality (13.91% vs. 4.76%, p = 0.0008). We then developed and provided a training (which we refer to as an “intervention”) to all ICU providers encouraging consistent use of phenobarbital in the ICU when appropriate. During the 3-month post-intervention period, 44 patients with AUD on the CIWA protocol were admitted to the ICU: 26 received phenobarbital, 12 received only lorazepam, and 6 received neither medication. Significantly more patients were treated with only phenobarbital (57.69%) compared with baseline (1.06%). Compared with patients treated with phenobarbital, patients treated with only lorazepam had a significantly higher mortality rate (33.33% vs. 7.69%, p = 0.04).
Conclusions: We found significant variability in the use of phenobarbital and lorazepam for treatment of AWS in the ICU. After a quality improvement training for ICU physicians, there was less frequent concurrent use of benzodiazepines and barbiturates, no difference in ICU or hospital LOS, and significantly lower mortality rate for those treated with phenobarbital.
Influence of different short peripheral cannula materials on the incidence of phlebitis in intensive care units: A post-hoc analysis of the AMOR-VENUS study
Aim of the study: Short peripheral cannula (SPC)-related phlebitis occurs in 7.5% of critically ill patients, and mechanical irritation from cannula materials is a risk factor. Softer polyurethane cannulas reportedly reduce phlebitis, but the incidence of phlebitis may vary depending on the type of polyurethane. Differences in cannula stiffness may also affect the incidence of phlebitis; however, this relationship is not well understood. This study analyzed intensive care unit (ICU) patient data to compare the incidence of phlebitis across different cannula products, focusing on polyurethane.
Material and Methods: This is a post-hoc analysis of the AMOR-VENUS study, which involved 23 ICUs in Japan. We included patients aged ≥ 18 years who were admitted to the ICU with SPCs. The primary outcome was phlebitis, evaluated using hazard ratios (HRs) and 95% confidence intervals (CIs). Based on market share and differences in synthesis, polyurethanes were categorized into PEU-Vialon® (BD, USA), SuperCath® (Medikit, Japan), and other polyurethanes; non-polyurethane materials were also analyzed. Multivariable marginal Cox regression analysis was performed using other polyurethanes as the reference.
Results: In total, 1,355 patients and 3,429 SPCs were evaluated. Among polyurethane cannulas, 1,087 (33.5%) were PEU-Vialon®, 702 (21.6%) were SuperCath®, and 276 (8.5%) were other polyurethanes. Among non-polyurethane cannulas, 1,292 (39.8%) were ethylene tetrafluoroethylene (ETFE) cannulas, and 72 (2.2%) used other materials. The highest incidence of phlebitis was observed with SuperCath® (13.1%). Multivariate analysis revealed an HR of 1.45 (95% CI 0.75-2.8, p = 0.21) for PEU-Vialon®, 2.60 (95% CI 1.35-5.00, p < 0.01) for SuperCath®, 2.29 (95% CI 1.19-4.42, p = 0.01) for ETFE, and 2.2 (95% CI 0.46-10.59, p = 0.32) for others.
Conclusions: The incidence of phlebitis varied among polyurethane cannulas. Further research is warranted to determine the causes of these differences.
Prediction of acute kidney injury in mechanically ventilated patients with COVID-19-related septic shock: An exploratory analysis of non-renal organ dysfunction markers
Background: Acute kidney injury (AKI) is a common and serious complication in critically ill patients with non-kidney organ dysfunction. Early prediction of AKI is crucial for timely intervention and improved outcomes. This study aimed to identify readily available non-renal predictors of AKI and to develop an exploratory prediction model in a specific cohort of critically ill patients with COVID-19-related septic shock requiring mechanical ventilation.
Materials and methods: This was a single-center, observational, retrospective cohort study conducted in the respiratory ICU of Hospital H+ Querétaro between April and December 2020. The study included 42 mechanically ventilated patients with septic shock secondary to SARS-CoV-2 infection and non-kidney organ dysfunction. AKI was defined using the KDIGO criteria. Trend analysis and bivariate and multivariate linear regression were used to identify predictors of AKI and severe AKI.
Results: AKI occurred in 23 (54.8%) patients, with 6 (14.3%) developing severe AKI. Trend analysis revealed differences in norepinephrine dose, hemoglobin, and lactate trends between groups. A simplified logistic regression model, validated internally with bootstrapping to prevent overfitting, identified a protective trend associated with higher hemoglobin levels on admission. Quantitative analysis of a forecasting model for daily renal function showed moderate predictive accuracy.
Conclusions: This study identified several readily available non-kidney organ dysfunction variables that can predict AKI and its severity in critically ill patients with COVID-19-related septic shock. These findings may help in the early identification of at-risk patients and facilitate timely interventions to potentially improve outcomes. Further validation in larger and more diverse populations is warranted.
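The KDIGO definition used above stages AKI from the rise in serum creatinine over baseline. A simplified sketch of the creatinine criteria follows; it omits the urine-output criteria and the 48-hour window for the absolute 0.3 mg/dL rise, so it is illustrative only:

```python
def kdigo_stage(scr, baseline, on_rrt=False):
    """Simplified KDIGO AKI stage from serum creatinine (mg/dL).
    Returns 0 (no AKI) through 3. Urine-output criteria and the
    48-hour timing window are not modeled in this sketch."""
    ratio = scr / baseline
    if on_rrt:
        # Initiation of renal replacement therapy is stage 3
        return 3
    # AKI definition: >= 1.5x baseline or an absolute rise >= 0.3 mg/dL
    is_aki = ratio >= 1.5 or (scr - baseline) >= 0.3
    if not is_aki:
        return 0
    if ratio >= 3.0 or scr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    return 1

# Example: creatinine 2.1 mg/dL from a baseline of 1.0 mg/dL
print(kdigo_stage(2.1, 1.0))  # 2
```

In the study, "severe AKI" would correspond to the higher stages produced by a rule of this kind, which is what the daily forecasting model attempted to anticipate from non-renal variables.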
Efficacy of gabapentin versus trospium chloride for prevention of catheter-related bladder discomfort inside the surgical intensive care unit: A prospective, randomised, controlled clinical study
Introduction: Catheter-related bladder discomfort (CRBD) after perioperative catheterisation of the urinary bladder (COUB) is not uncommon.
Aim of the study: We evaluated the efficacy of both oral gabapentin and trospium in preventing CRBD during the early postoperative period in patients admitted to the surgical intensive care unit (S-ICU).
Material and Methods: 120 patients aged 20–65 years (ASA I, II, or III) admitted to the S-ICU after undergoing elective spinal surgery (ESS) with COUB were included. They were randomly assigned to receive an oral 400 mg gabapentin capsule (Group G), an oral 60 mg slow-release trospium chloride capsule (Group T), or nothing (Group C). The primary outcome was the occurrence of CRBD and its severity at 1, 2, 6, 12, and 24 hours after study drug administration (SDA).
Results: Groups G and T had a statistically significantly lower incidence of CRBD than group C at 1, 2, 6, 12, and 24 hours after SDA. Both had considerably lower severity than group C in the first two hours only (P = 0.001 for both). Group T had a non-significantly lower incidence and severity of CRBD than group G. Group G had significantly lower mean total fentanyl requirements for up to 24 hours after SDA than groups T and C (P < 0.001).
Conclusion: Both oral gabapentin and slow-release trospium chloride capsules administered postoperatively significantly decreased the incidence and severity of CRBD in the early postoperative period among S-ICU patients, without significant differences between the two drugs.
Positive fluid balance is associated with earlier acute kidney injury in COVID-19 patients
Introduction: Managing fluid balance in COVID-19 patients can be challenging, particularly if acute kidney injury (AKI) develops.
Aim of the study: We studied the relationship between fluid net input and output (FNIO) in COVID-19 patients and the development of AKI, time to development of AKI, in-hospital length of stay (LOS), and in-hospital mortality.
Material and Methods: This was a retrospective study of 403 patients with COVID-19. FNIO data were collected from day 1 through day 10, or until AKI onset if it occurred earlier.
Results: AKI occurred in 22.8% and in-hospital mortality in 26.3% of patients; mean days to AKI were 7.7 (SD = 6.3), and mean LOS was 11.5 (SD = 13.2) days. In the multivariate logistic regression analyses, increased mean FNIO was significantly associated with slightly increased odds of mortality (OR = 1.001, 95% CI: 1.0001–1.0011, p = 0.02) but was not significantly associated with AKI. In the multivariate linear regression analyses, increased mean FNIO was significantly associated with fewer days to AKI (B = −6.63 × 10⁻⁵, SE < 0.001, p = 0.003) in the whole sample and with more days to AKI in the subset treated in the ICU (B < 0.001, SE < 0.001, p < 0.001), while mean FNIO was not significantly associated with LOS.
Conclusions: Positive fluid balance was associated with faster onset of AKI and increased mortality. Fluid administration in patients with COVID-19 should be guided by routinely measuring FNIO. A restrictive fluid management regimen rather than usual care should be practiced.
Inhaled sevoflurane in critically ill COVID-19 patients: A retrospective cohort study
Background: Managing sedation in critically ill COVID-19 patients is challenging due to high sedative requirements and organ dysfunction that alters drug metabolism. Inhaled sevoflurane offers a lung-eliminated alternative that may mitigate drug accumulation.
Methods: This single-center, retrospective cohort study analyzed 43 mechanically ventilated COVID-19 patients (March–November 2020). Patients received inhaled sevoflurane adjunctive to IV sedation (n=30) or IV sedation alone (n=13). The primary outcome was the cumulative dose of IV sedatives over 7 days. Secondary outcomes included time to extubation and antipsychotic use.
Results: There was no significant difference in the cumulative dose of IV sedatives between groups. However, the sevoflurane group had a significantly longer median duration of mechanical ventilation (206 [IQR 144-356] vs 144 [IQR 115-156] hours, p=0.005) and a higher requirement for antipsychotic medication (66.6% vs 15.3%, OR 18.6, p=0.011). Daily doses of propofol were lower in the sevoflurane group on specific days, but overall burden was unchanged.
Conclusions: In this cohort, adjunctive inhaled sevoflurane did not significantly reduce the cumulative burden of IV sedatives and was associated with delayed extubation and increased antipsychotic use. While sevoflurane is a feasible alternative, these findings suggest caution regarding weaning and delirium management in COVID-19 patients.