Introduction: The predictive potential of demographics, clinical characteristics, and inflammatory markers at admission for determining the future intubation needs of hospitalised COVID-19 patients is unknown. The study aimed to assess whether elevated serum inflammatory markers at admission predict the need for intubation in COVID-19 patients.
Methods: In a retrospective cohort study of hospitalised SARS-CoV-2-positive patients, single-variable and multivariable regression analyses were used to determine covariate effects on the odds of intubation, and a minimax concave penalty regularised logistic regression was used to build a predictive model. The model was then tested on a second, prospective, independent cohort.
Results: Systemic inflammatory markers obtained at admission were higher in patients who required subsequent intubation, and the adjusted odds of intubation increased for every standard deviation above the mean for C-reactive protein (CRP) (OR 2.8, 95% CI 1.8-4.5, p<0.001) and lactate dehydrogenase (LDH) (OR 2.1, 95% CI 1.3-3.3, p=0.002). A predictive model incorporating CRP, LDH, and diabetes status at the time of admission predicted intubation status with an area under the curve (AUC) of 0.78, with a corresponding sensitivity of 86% and specificity of 63%. On the validation cohort, this model achieved an AUC of 0.83, a sensitivity of 91%, and a specificity of 41%.
Conclusion: In patients hospitalised with COVID-19, elevated serum inflammatory markers measured within the first twenty-four hours of admission are associated with an increased need for intubation. Additionally, a model combining C-reactive protein, lactate dehydrogenase, and the presence of diabetes may help predict the future need for intubation.
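The discrimination metrics reported in the abstract (AUC, sensitivity, specificity) can be illustrated with a short sketch. This is not the authors' model or data: the risk scores below are invented values standing in for the output of a model such as one combining CRP, LDH and diabetes status, and the AUC is computed via the Mann-Whitney formulation, i.e. the probability that a randomly chosen intubated patient scores higher than a randomly chosen non-intubated one.

```python
# Hedged illustration with made-up scores (not the study's data): computing
# sensitivity, specificity and AUC for a binary risk model.

def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity and specificity at a given score threshold (label 1 = event)."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: P(random case outscores random non-case)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy cohort: 1 = intubated, 0 = not intubated
scores = [0.9, 0.8, 0.7, 0.4, 0.35, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0, 0]
```

At a threshold of 0.4, this toy data gives a sensitivity of 1.0 and a specificity of 0.75; reporting a sensitivity/specificity pair alongside the AUC, as the abstract does, fixes one operating point on the ROC curve.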
Survey of U.S. Critical Care Practitioners on Net Ultrafiltration Prescription and Practice Among Critically Ill Patients Receiving Kidney Replacement Therapy
Introduction: The current prescription and practice of net ultrafiltration among critically ill patients receiving kidney replacement therapy in the U.S. are unclear.
Aim of the study: To assess the attitudes of U.S. critical care practitioners on net ultrafiltration (UFNET) prescription and practice among critically ill patients with acute kidney injury treated with kidney replacement therapy.
Methods: A secondary analysis was conducted of a multinational survey of intensivists, nephrologists, advanced practice providers, and ICU and dialysis nurses practising in the U.S.
Results: Of 1,569 respondents, 465 (29.6%) practitioners were from the U.S. Most were nurses and advanced practice providers (58%), followed by intensivists (38.2%). The median duration of practice was 8.7 (IQR, 4.2-19.4) years. Practitioners reported using continuous kidney replacement therapy as the first modality for UFNET in 60% (IQR 20%-90%) of cases. There was significant variation in the assessment of the prescribed-to-delivered dose of UFNET, the use of continuous kidney replacement therapy for UFNET, the methods used to achieve UFNET, and the assessment of net fluid balance during continuous kidney replacement therapy. There was also variation in the interventions performed to manage haemodynamic instability, perceived barriers to UFNET, the belief that early and protocol-based fluid removal is beneficial, and willingness to enrol patients in a clinical trial.
Conclusions: There was considerable practice variation in UFNET among critical care practitioners in the U.S., reflecting the need to generate evidence-based practice guidelines for UFNET.
The Importance of Iron Administration in Correcting Anaemia after Major Surgery
Introduction: Postoperative anaemia can affect more than 90% of patients undergoing major surgery. Patients facing significant blood loss or preoperative anaemia combined with major surgery develop an absolute iron deficiency. Studies have shown the negative impact of these factors on transfusion requirements, infections, prolonged hospitalisation and long-term morbidity.
Aim of the study: The research was performed to determine the correlation between intravenous iron administration in the postoperative period and improved haemoglobin correction trend.
Material and methods: A prospective study was conducted to screen and treat iron deficiency in patients undergoing major surgery associated with significant bleeding. For iron deficiency anaemia screening in the postoperative period, the following laboratory parameters were assessed: haemoglobin, serum iron, transferrin saturation (TSAT), ferritin, direct serum total iron-binding capacity (dTIBC), mean corpuscular volume (MCV) and mean corpuscular haemoglobin (MCH). In addition, serum glucose, fibrinogen, urea, creatinine and lactate values were also collected.
Results: Twenty-one patients undergoing major surgeries (52.38% emergency and 47.61% elective interventions) were included in the study. Iron deficiency, as defined by ferritin 100-300 μg/L along with transferrin saturation (TSAT) < 20%, mean corpuscular volume (MCV) < 92 fL, mean corpuscular haemoglobin (MCH) < 33 g/dL, serum iron < 10 μmol/L and direct serum total iron-binding capacity (dTIBC) > 36 μmol/L, was identified in all cases. To correct the deficit and optimise the haematological status, all patients received intravenous ferric carboxymaltose (500-1000 mg, single dose). Using quadratic regression analysis, the trend of haemoglobin correction was found to be favourable.
Conclusion: The administration of intravenous ferric carboxymaltose in the postoperative period showed the beneficial effect of this type of intervention on the haemoglobin correction trend in these groups of patients.
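The screening logic quoted in the results can be sketched as a rough illustration. TSAT is serum iron expressed as a percentage of total iron-binding capacity, and the thresholds below are the ferritin, TSAT, serum iron and dTIBC cut-offs quoted in the abstract (the MCV and MCH criteria are omitted for brevity). Function names and the example values are hypothetical, not patient data.

```python
# Hypothetical sketch of the iron-deficiency screening thresholds quoted in
# the abstract; MCV/MCH criteria omitted for brevity. Not clinical advice.

def tsat_percent(serum_iron_umol_l, tibc_umol_l):
    """Transferrin saturation: serum iron as a percentage of iron-binding capacity."""
    return 100.0 * serum_iron_umol_l / tibc_umol_l

def iron_deficient(ferritin_ug_l, tsat_pct, serum_iron_umol_l, dtibc_umol_l):
    """True when all four abstract thresholds are met simultaneously."""
    return (100 <= ferritin_ug_l <= 300
            and tsat_pct < 20
            and serum_iron_umol_l < 10
            and dtibc_umol_l > 36)

# Made-up example values
tsat = tsat_percent(8.0, 50.0)             # 16.0 %
flag = iron_deficient(150, tsat, 8.0, 40.0)
```

Note that the ferritin band (100-300 μg/L) combined with a low TSAT captures a functional deficit despite measurable iron stores, which is why all four criteria are required together.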
Analgosedation: The use of Fentanyl Compared to Hydromorphone
Background: The 2018 Society of Critical Care Medicine guidelines on the “Prevention and Management of Pain, Agitation/Sedation, Delirium, Immobility, and Sleep Disruption in Adult Patients in the ICU” advocate for protocol-based analgosedation practices. There are limited data available to guide which analgesic to use. This study compares outcomes in patients who received continuous infusions of fentanyl or hydromorphone as sedative agents in the intensive care setting.
Methods: This retrospective cohort study evaluated patients admitted into the medical intensive care unit, the surgical intensive care unit, and the cardiac intensive care unit from April 1, 2017, to August 1, 2018, who were placed on continuous analgesics. Patients were divided according to receipt of fentanyl or hydromorphone as a continuous infusion as a sedative agent. The primary endpoints were ICU length of stay and time on mechanical ventilation.
Results: A total of 177 patients were included in the study; 103 received fentanyl as a continuous infusion, and 74 received hydromorphone as a continuous infusion. Baseline characteristics were similar between groups. Patients in the hydromorphone group had deeper sedation targets. Median ICU length of stay was eight days in the fentanyl group compared to seven days in the hydromorphone group (p = 0.11) and median time on mechanical ventilation was 146.47 hours in the fentanyl group and 122.33 hours in the hydromorphone group (p = 0.31). There were no statistically significant differences in the primary endpoints of ICU length of stay and time on mechanical ventilation between fentanyl and hydromorphone for analgosedation purposes.
Conclusion: No statistically significant differences were found in the primary endpoints studied. Patients in the hydromorphone group required more tracheostomies and restraints and were more likely to have a higher proportion of Critical Care Pain Observation Tool (CPOT) scores > 2.
Acute Kidney Injury Following Rhabdomyolysis in Critically Ill Patients
Introduction: Rhabdomyolysis, which results from the rapid breakdown of damaged skeletal muscle, can potentially lead to acute kidney injury.
Aim: To determine the incidence and associated risk of kidney injury following rhabdomyolysis in critically ill patients.
Methods: All critically ill patients admitted from January 2016 to December 2017 were screened. A creatine kinase level of > 5 times the upper limit of normal (> 1000 U/L) was defined as rhabdomyolysis, and kidney injury was determined based on the Kidney Disease: Improving Global Outcomes (KDIGO) criteria. In addition, trauma, prolonged surgery, sepsis, antipsychotic drugs and hyperthermia were included as risk factors for kidney injury.
Results: Out of 1620 admissions, 149 (9.2%) were identified as having rhabdomyolysis, and 54 (36.2%) of these developed kidney injury. Acute kidney injury was largely related to rhabdomyolysis following prolonged surgery (18.7%), sepsis (50.0%) or trauma (31.5%). The reduction in creatine kinase levels following hydration treatment was statistically significant in the non-kidney-injury group (Z = -3.948, p<0.05) compared to the kidney injury group (Z = -0.623, p=0.534). The odds of developing acute kidney injury were 1.040 (p<0.001) for body weight > 50 kg, 1.372 (p<0.001) for SOFA score > 2 and 5.333 (p<0.001) for sepsis, and multivariate regression analysis showed that a SOFA score > 2 (p<0.001), body weight > 50 kg (p=0.016) and sepsis (p<0.05) were independent risk factors. The overall mortality due to rhabdomyolysis was 15.4% (23/149), with a significantly higher incidence of mortality in the kidney injury group (35.2%) vs the non-kidney-injury group (3.5%) [p<0.001].
Conclusions: One-third of rhabdomyolysis patients developed acute kidney injury with a significantly high mortality rate. Sepsis was a prominent cause of acute kidney injury. Both sepsis and a SOFA score >2 were significant independent risk factors.
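The unadjusted odds quoted in the results can be illustrated with a generic 2x2 calculation. The counts below are invented for illustration and are not the study's data; the function computes the cross-product odds ratio and a 95% confidence interval on the log-odds scale, which is the usual large-sample approximation.

```python
import math

# Generic unadjusted odds ratio from a 2x2 table (illustrative counts only,
# not the study's data): a = exposed with AKI, b = exposed without AKI,
# c = unexposed with AKI, d = unexposed without AKI.

def odds_ratio_ci95(a, b, c, d):
    """Cross-product odds ratio with a Wald 95% CI on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Invented counts for a binary risk factor such as sepsis
or_, lo, hi = odds_ratio_ci95(30, 20, 24, 75)
```

The multivariate odds ratios reported in the abstract additionally adjust each factor for the others, so they would come from a fitted logistic regression rather than this single-table calculation.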
Evaluation of Sleep Architecture Using 24-hour Polysomnography in Patients Recovering from Critical Illness in an Intensive Care Unit and High Dependency Unit: A Longitudinal, Prospective, and Observational Study
Background and objective: The sleep architecture of critically ill patients treated in intensive care units (ICU) and high dependency units (HDU) is frequently disrupted and inadequate, both qualitatively and quantitatively. The study aimed to investigate the factors influencing sleep architecture and quality in the ICU and HDU in a resource-limited setting with financial constraints and a lack of the human resources and technology needed for routine monitoring of noise and light and for sleep promotion strategies.
Methods: The study was longitudinal, prospective, hospital-based, analytic, and observational. Pre-hospitalisation Insomnia Severity Index (ISI) and Epworth Sleepiness Scale (ESS) scores were recorded. Patients underwent 24-hour polysomnography (PSG) with simultaneous monitoring of noise and light in their environment. Patients stabilised in the intensive care unit (ICU) were transferred to the high dependency unit (HDU), where the 24-hour polysomnography and environmental monitoring were repeated. Following PSG, the Richards-Campbell Sleep Questionnaire (RCSQ) was employed to rate patients' sleep in both the ICU and the HDU.
Results: Of 46 screened patients, 26 were treated in the intensive care unit (ICU) and then transferred to the high dependency unit (HDU). The study population's mean (SD) age was 35.96 (11.6) years, with a predominantly male population (53.2%, n=14). The mean (SD) ISI and ESS scores were 6.88 (2.58) and 4.92 (1.99), respectively. The comparative analysis of PSG data recorded in the ICU and the HDU showed a statistically significant reduction in the N1 and N2 stages and an increase in the N3 stage of sleep (p<0.05). The mean (SD) RCSQ scores in the ICU and the HDU were 54.65 (7.70) and 60.19 (10.85), respectively (p = 0.04). Disease severity (APACHE II) had a weak correlation with the arousal index that failed to reach statistical significance (coeff = 0.347, p = 0.083).
Conclusion: Sleep in the ICU is disturbed, and the disturbance persists into the recovery period in the critically ill. However, during recovery, sleep architecture shows signs of restoration.
Impact of the Severity of Liver Injury in COVID-19 Patients Admitted to an Intensive Care Unit During the SARS-CoV2 Pandemic Outbreak
Introduction: In December 2019, the World Health Organization (WHO) identified a novel coronavirus, originating in Wuhan, China, as a pneumonia-causing pathogen. Epidemiological data in Romania show more than 450,000 confirmed patients, with a constant rate of approximately 10% requiring admission to an intensive care unit.
Method: A retrospective, observational study was conducted from 1st March to 30th October 2020, comprising 657 patients confirmed as having COVID-19 who had been admitted to the intensive care unit of the Mures County Clinical Hospital, Tîrgu Mures, Romania, which had been designated as a support hospital during the pandemic. Patients who presented with abnormal liver function tests at admission or developed them within the first seven days of admission were included in the study; patients with pre-existing liver disease were excluded.
Results: The mean (SD) age of patients included in the study was 59.41 (14.66) years, with a male:female ratio of 1.51:1. Survivor status, defined as patients discharged from the intensive care unit, was significantly associated with parameters such as age, leukocyte count, albumin level and glycaemia level (p<0.05 for all parameters).
Conclusions: Liver injury expressed through liver function tests cannot solely constitute a prognostic factor for COVID-19 patients, but its presence in critically ill patients should be further investigated and included in future guideline protocols.
Critical Care Workers Have Lower Seroprevalence of SARS-CoV-2 IgG Compared with Non-patient Facing Staff in the First Wave of COVID-19
Introduction: In early 2020, during the first surge of the coronavirus disease 2019 (COVID-19) pandemic, many health care workers (HCW) were redeployed to critical care environments to support intensive care teams looking after patients with severe COVID-19. There was considerable anxiety about an increased risk of COVID-19 for these staff. To determine whether critical care HCW were at increased risk of hospital-acquired infection, we explored the relationship between workplace, patient-facing role and evidence of immune exposure to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) within a quaternary hospital providing a regional critical care response. Routine viral surveillance was not available at this time.
Methods: We screened over 500 HCW (25% of the total workforce) for a history of clinical symptoms of possible COVID-19, assigning a symptom severity score, and quantified SARS-CoV-2 serum antibodies as evidence of immune exposure to the virus.
Results: Whilst 45% of the cohort reported symptoms that they considered may have represented COVID-19, 14% had evidence of immune exposure. Staff in patient-facing critical care roles were the least likely to be seropositive (9%), and staff working in non-patient-facing roles were the most likely (22%). Anosmia and fever were the most discriminating symptoms for seropositive status. Older males presented with more severe symptoms. Of the 12 staff who screened positive by nasal swab (10 symptomatic), 3 showed no evidence of seroconversion in convalescence.
Conclusions: Patient-facing staff working in critical care do not appear to be at increased risk of hospital-acquired infection; however, the risk of nosocomial infection from non-patient-facing staff may be more significant than previously recognised. Most staff with symptoms ascribed to possible COVID-19 had no evidence of immune exposure, although seroprevalence may underrepresent infection frequency. Older male staff were at the greatest risk of more severe symptoms.
Candida spp. in Lower Respiratory Tract Secretions – A Ten-Year Retrospective Study
Introduction: Lower respiratory tract secretions (LRTS) like sputum and tracheal aspirates are frequently sent to the microbiology laboratory from patients with various respiratory pathologies. Improper collection techniques can lead to false-positive results, resulting in improper therapy.
Aim of the study: To determine the percentage of contaminated samples sent to the microbiology laboratory and to establish the prevalence of Candida spp. in non-contaminated samples and, therefore, the presence of Candida spp. originating in lower respiratory tract infections.
Material and Methods: A 10-year data survey was conducted to assess the differences in Candida prevalence between contaminated and non-contaminated samples, assessed and categorised by the Bartlett grading system, and to emphasise the importance of quality control for potentially contaminated samples. The data were analysed according to gender, age, referring department, and the species of Candida. For the statistical analysis, the Kruskal-Wallis and Fisher tests were used, and the alpha value was set at 0.05.
Results: The prevalence of Candida spp. in all analysed samples was 31.60%. After excluding the contaminated samples, the actual prevalence was 27.66%. Of all sputum samples, 31.6% were contaminated. Patients older than 40 years were more prone to provide contaminated sputum samples. C. albicans was more prevalent in non-contaminated sputum samples. In both sputum and tracheal aspirates, the chances of identifying a single species were higher than the chances of identifying multiple species.
Conclusions: The study emphasises the importance of assessing the quality of sputum samples because of the high number of improperly collected samples sent to the microbiology laboratory.
The Impact of Hyperoxia Treatment on Neurological Outcomes and Mortality in Moderate to Severe Traumatic Brain Injury Patients
Background: Traumatic brain injury is a leading cause of morbidity and mortality worldwide. The relationship between hyperoxia and outcomes in patients with TBI remains controversial. We assessed the effect of persistent hyperoxia on the neurological outcomes and survival of critically ill patients with moderate-severe TBI.
Method: This was a retrospective cohort study of all adults with moderate-severe TBI admitted to the ICU between 1st January 2016 and 31st December 2019 who required invasive mechanical ventilation. Arterial blood gas (ABG) data were recorded within the first 3 hours of intubation and then after 6-12 hours and 24-48 hours. The patients were divided into two categories: Group I had a PaO2 < 120 mmHg on at least two ABGs undertaken in the first twelve hours post-intubation, and Group II had a PaO2 ≥ 120 mmHg on at least two ABGs in the same period. Multivariable logistic regression was performed to assess predictors of hospital mortality and good neurological outcome (Glasgow Outcome Score ≥ 4).
Results: The study included 309 patients: 54.7% (n=169) in Group I and 45.3% (n=140) in Group II. Hyperoxia was not associated with increased mortality in the ICU (20.1% vs. 17.9%, p=0.62) or the hospital (20.7% vs. 17.9%, p=0.53); moreover, the mean (SD) Glasgow Coma Scale at hospital discharge (11.0 (5.1) vs. 11.2 (4.9), p=0.70) and the mean (SD) Glasgow Outcome Score (3.1 (1.3) vs. 3.1 (1.2), p=0.47) were similar. In multivariable logistic regression analysis, persistent hyperoxia was not associated with increased mortality (adjusted odds ratio [aOR] 0.71, 95% CI 0.34-1.35, p=0.29). PaO2 within the first 3 hours was also not associated with mortality (reference: PaO2 60-120 mmHg within 3 hours): 121-200 mmHg: aOR 0.58, 95% CI 0.23-1.49, p=0.26; 201-300 mmHg: aOR 0.66, 95% CI 0.27-1.59, p=0.35; 301-400 mmHg: aOR 0.85, 95% CI 0.31-2.35, p=0.75; and >400 mmHg: aOR 0.51, 95% CI 0.18-1.44, p=0.20. However, hyperoxia > 400 mmHg was associated with a lower likelihood of a good neurological outcome (GOS ≥ 4) at hospital discharge (aOR 0.36, 95% CI 0.13-0.98, p=0.046; reference: PaO2 60-120 mmHg within 3 hours).
Conclusion: In intubated patients with moderate-severe TBI, hyperoxia in the first 48 hours was not independently associated with hospital mortality. However, PaO2 >400mmHg may be associated with a worse neurological outcome on hospital discharge.
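The PaO2 exposure bands used in the regression above can be sketched as a simple categoriser. The band edges are taken from the abstract; the function name and the handling of readings below 60 mmHg (a band the abstract does not report) are assumptions for illustration only.

```python
# Assigns a PaO2 reading (mmHg) to the exposure bands quoted in the abstract,
# with 60-120 mmHg as the reference category. The "<60" label is an assumed
# placeholder; the abstract does not report such a band.

def pao2_band(pao2_mmhg):
    if pao2_mmhg < 60:
        return "<60"
    if pao2_mmhg <= 120:
        return "60-120 (reference)"
    if pao2_mmhg <= 200:
        return "121-200"
    if pao2_mmhg <= 300:
        return "201-300"
    if pao2_mmhg <= 400:
        return "301-400"
    return ">400"
```

In a fitted model, each non-reference band becomes an indicator variable, so each reported aOR compares that band against the 60-120 mmHg reference.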