Volume 7
The Importance of Iron Administration in Correcting Anaemia after Major Surgery
Introduction: Postoperative anaemia can affect more than 90% of patients undergoing major surgery. Patients with significant blood loss or preoperative anaemia who undergo major surgery develop an absolute iron deficiency. Studies have shown the negative impact of these factors on transfusion requirements, infection rates, length of hospitalisation and long-term morbidity.
Aim of the study: The research was performed to determine whether intravenous iron administration in the postoperative period is associated with an improved haemoglobin correction trend.
Material and methods: A prospective study was conducted to screen and treat iron deficiency in patients undergoing major surgery associated with significant bleeding. For iron deficiency anaemia screening in the postoperative period, the following laboratory parameters were assessed: haemoglobin, serum iron, transferrin saturation (TSAT), ferritin, direct serum total iron-binding capacity (dTIBC), mean corpuscular volume (MCV) and mean corpuscular haemoglobin (MCH). In addition, serum glucose, fibrinogen, urea, creatinine and lactate values were collected.
Results: Twenty-one patients undergoing major surgery (52.38% emergency and 47.61% elective interventions) were included in the study. Iron deficiency, defined as ferritin 100-300 μg/L with transferrin saturation (TSAT) < 20%, mean corpuscular volume (MCV) < 92 fL, mean corpuscular haemoglobin (MCH) < 33 g/dL, serum iron < 10 μmol/L and direct serum total iron-binding capacity (dTIBC) > 36 μmol/L, was identified in all cases. To correct the deficit and optimise haematological status, all patients received intravenous ferric carboxymaltose (500-1000 mg, single dose). On quadratic trend analysis, the haemoglobin correction trend was found to be favourable.
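For illustration only, the composite screening rule reported above can be expressed as a simple check. This is a minimal sketch assuming exactly the thresholds listed in this abstract; it is not a validated clinical decision tool, and the function name and example values are invented.

```python
# Minimal sketch of the iron-deficiency screening rule reported above.
# Thresholds are those listed in the abstract; example values are invented.

def is_iron_deficient(ferritin_ug_l: float, tsat_pct: float, mcv_fl: float,
                      mch: float, serum_iron_umol_l: float,
                      dtibc_umol_l: float) -> bool:
    """Return True when all of the reported criteria are met."""
    return (100 <= ferritin_ug_l <= 300    # ferritin 100-300 ug/L
            and tsat_pct < 20              # TSAT < 20%
            and mcv_fl < 92                # MCV < 92 fL
            and mch < 33                   # MCH < 33
            and serum_iron_umol_l < 10     # serum iron < 10 umol/L
            and dtibc_umol_l > 36)         # dTIBC > 36 umol/L

# Hypothetical postoperative patient meeting all criteria
print(is_iron_deficient(180, 12, 85, 28, 7, 45))  # True
```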
Conclusion: The administration of intravenous ferric carboxymaltose in the postoperative period had a beneficial effect on the haemoglobin correction trend in this group of patients.
Analgosedation: The Use of Fentanyl Compared to Hydromorphone
Background: The 2018 Society of Critical Care Medicine guidelines on the “Prevention and Management of Pain, Agitation/Sedation, Delirium, Immobility, and Sleep Disruption in Adult Patients in the ICU” advocate for protocol-based analgosedation practices, but limited data are available to guide the choice of analgesic. This study compares outcomes in patients who received continuous infusions of fentanyl or hydromorphone as sedative agents in the intensive care setting.
Methods: This retrospective cohort study evaluated patients admitted to the medical, surgical, and cardiac intensive care units from April 1, 2017, to August 1, 2018, who were placed on continuous analgesics. Patients were grouped according to whether they received fentanyl or hydromorphone as a continuous infusion for sedation. The primary endpoints were ICU length of stay and time on mechanical ventilation.
Results: A total of 177 patients were included in the study; 103 received fentanyl and 74 received hydromorphone as a continuous infusion. Baseline characteristics were similar between groups, although patients in the hydromorphone group had deeper sedation targets. Median ICU length of stay was eight days in the fentanyl group compared to seven days in the hydromorphone group (p = 0.11), and median time on mechanical ventilation was 146.47 hours in the fentanyl group and 122.33 hours in the hydromorphone group (p = 0.31). Neither primary endpoint differed significantly between fentanyl and hydromorphone for analgosedation purposes.
Conclusion: No statistically significant differences were found in the primary endpoints studied. Patients in the hydromorphone group required more tracheostomies and restraints, and had a higher proportion of Critical Care Pain Observation Tool (CPOT) scores > 2.
Acute Kidney Injury Following Rhabdomyolysis in Critically Ill Patients
Introduction: Rhabdomyolysis, which results from the rapid breakdown of damaged skeletal muscle, can lead to acute kidney injury.
Aim: To determine the incidence of, and risk factors associated with, acute kidney injury following rhabdomyolysis in critically ill patients.
Methods: All critically ill patients admitted from January 2016 to December 2017 were screened. Rhabdomyolysis was defined as a creatine kinase level > 5 times the upper limit of normal (> 1000 U/L), and kidney injury was graded according to the Kidney Disease: Improving Global Outcomes (KDIGO) criteria. In addition, trauma, prolonged surgery, sepsis, antipsychotic drugs and hyperthermia were assessed as risk factors for kidney injury.
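As a minimal illustration of these screening definitions, the sketch below flags rhabdomyolysis from a creatine kinase value and applies the KDIGO serum-creatinine criteria for acute kidney injury. The function names are invented, and the sketch deliberately omits the urine-output arm of the KDIGO definition.

```python
# Illustrative sketch of the screening definitions described above.
# Simplified to the serum-creatinine arm of KDIGO; names are invented.

CK_THRESHOLD_U_L = 1000  # > 5x the upper limit of normal, per the study

def has_rhabdomyolysis(ck_u_l: float) -> bool:
    return ck_u_l > CK_THRESHOLD_U_L

def kdigo_aki_by_creatinine(baseline_cr_mg_dl: float,
                            current_cr_mg_dl: float,
                            rise_within_48h_mg_dl: float) -> bool:
    """KDIGO creatinine criteria: a rise of >= 0.3 mg/dL within 48 hours,
    or a current value >= 1.5x baseline within the prior 7 days."""
    return (rise_within_48h_mg_dl >= 0.3
            or current_cr_mg_dl >= 1.5 * baseline_cr_mg_dl)

# Hypothetical patient: CK 4500 U/L, creatinine rising 0.9 -> 1.6 mg/dL
print(has_rhabdomyolysis(4500), kdigo_aki_by_creatinine(0.9, 1.6, 0.4))
```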
Results: Out of 1620 admissions, 149 (9.2%) were identified as having rhabdomyolysis, and 54 of these (36.2%) developed kidney injury. Acute kidney injury related to rhabdomyolysis largely followed sepsis (50.0%), trauma (31.5%) or prolonged surgery (18.7%). The reduction in creatine kinase levels following hydration treatment was statistically significant in the non-kidney-injury group (Z = -3.948, p<0.05) but not in the kidney-injury group (Z = -0.623, p=0.534). The odds of developing acute kidney injury were 1.040 (p<0.001) for body weight >50 kg, 1.372 (p<0.001) for a SOFA score >2 and 5.333 (p<0.001) for sepsis; on multivariate regression analysis, a SOFA score >2 (p<0.001), body weight >50 kg (p=0.016) and sepsis (p<0.05) were independent risk factors. The overall mortality due to rhabdomyolysis was 15.4% (23/149), and mortality was significantly higher in the kidney-injury group than in the non-kidney-injury group (35.2% vs 3.5%, p<0.001).
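The odds ratios reported above are of the kind obtained by exponentiating logistic-regression coefficients (OR = exp(beta)). The sketch below demonstrates that relationship on synthetic data; it is an illustration of the method only and does not reproduce the study's analysis.

```python
# How odds ratios arise from logistic regression: OR = exp(coefficient).
# Synthetic data for illustration only; not a re-analysis of the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
sepsis = rng.integers(0, 2, n)      # binary risk factor (invented)
sofa_gt2 = rng.integers(0, 2, n)    # binary risk factor (invented)

# Assumed true effects for the simulation
logit = -2.0 + 1.6 * sepsis + 0.8 * sofa_gt2
aki = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([sepsis, sofa_gt2]))
fit = sm.Logit(aki, X).fit(disp=0)
print(np.exp(fit.params[1:]))  # odds ratios for sepsis and SOFA > 2
```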
Conclusions: One-third of the patients with rhabdomyolysis developed acute kidney injury, which was accompanied by a significantly higher mortality rate. Sepsis was a prominent cause of acute kidney injury, and both sepsis and a SOFA score >2 were significant independent risk factors.
Evaluation of Sleep Architecture Using 24-hour Polysomnography in Patients Recovering from Critical Illness in an Intensive Care Unit and High Dependency Unit: A Longitudinal, Prospective, and Observational Study
Background and objective: The sleep architecture of critically ill patients treated in intensive care units (ICU) and high dependency units (HDU) is frequently disturbed and inadequate, both qualitatively and quantitatively. This study aimed to investigate the factors influencing sleep architecture and quality in the ICU and HDU of a resource-limited setting: one with financial constraints and without the human resources and technology for routine monitoring of noise and light or for sleep promotion strategies.
Methods: The study was longitudinal, prospective, hospital-based, analytic, and observational. Pre-hospitalisation Insomnia Severity Index (ISI) and Epworth Sleepiness Scale (ESS) scores were recorded. Patients underwent 24-hour polysomnography (PSG) with simultaneous monitoring of noise and light in their environment. Patients stabilised in the ICU were transferred to the HDU, where the 24-hour PSG with simultaneous noise and light monitoring was repeated. Following each PSG, the Richards-Campbell Sleep Questionnaire (RCSQ) was employed to rate patients’ sleep in both the ICU and the HDU.
Results: Of 46 screened patients, 26 were treated in the ICU and then transferred to the HDU. The mean (SD) age of the study population was 35.96 (11.6) years, and the population was predominantly male (53.2%, n=14). The mean (SD) ISI and ESS scores were 6.88 (2.58) and 4.92 (1.99), respectively. Comparative analysis of the PSG data recorded in the ICU and the HDU showed a statistically significant reduction in the N1 and N2 sleep stages and an increase in the N3 stage (p<0.05). The mean (SD) RCSQ scores in the ICU and the HDU were 54.65 (7.70) and 60.19 (10.85), respectively (p = 0.04). Disease severity (APACHE II) showed a weak correlation with the arousal index that did not reach statistical significance (coeff = 0.347, p = 0.083).
Conclusion: Sleep in the ICU is disturbed, and the disturbance persists into the recovery period in the critically ill. However, during recovery, sleep architecture shows signs of restoration.
Impact of the Severity of Liver Injury in COVID-19 Patients Admitted to an Intensive Care Unit During the SARS-CoV-2 Pandemic Outbreak
Introduction: In December 2019, the World Health Organization (WHO) identified a novel coronavirus, originating in Wuhan, China, as a pneumonia-causing pathogen. Epidemiological data in Romania show more than 450,000 confirmed patients, with a consistent rate of approximately 10% requiring admission to an intensive care unit.
Method: A retrospective, observational study was conducted from 1st March to 30th October 2020, comprising 657 patients confirmed as having COVID-19 who had been admitted to the intensive care unit of the Mures County Clinical Hospital, Tîrgu Mures, Romania, which had been designated a support hospital during the pandemic. Patients who presented with abnormal liver function tests at admission, or who developed them within the first seven days of admission, were included in the study; patients with pre-existing liver disease were excluded.
Results: The mean (SD) age of patients included in the study was 59.41 (14.66) years, with a male:female ratio of 1.51:1. Survivor status, defined as discharge from the intensive care unit, was significantly associated with parameters such as age, leukocyte count, albumin level and glycaemia (p<0.05 for all parameters).
Conclusions: Liver injury, as expressed through liver function tests, cannot on its own constitute a prognostic factor for COVID-19 patients, but its presence in critically ill patients should be further investigated and incorporated into future guideline protocols.
Accuracy of Diagnostic Tests
Following the outbreak of the coronavirus disease 2019 (COVID-19) pandemic, the design, development, validation, verification and implementation of diagnostic tests were actively addressed by a large number of diagnostic test manufacturers. This paper deals with the biases and sources of variation that influence the accuracy of diagnostic tests. It covers calculating and interpreting test characteristics; defining what is meant by test accuracy; the basic study design for evaluating test accuracy; the meaning of sensitivity, specificity, positive predictive value and negative predictive value, and how to evaluate them numerically; and the receiver operating characteristic (ROC) curve and the area under the curve (AUC).
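As a brief numerical illustration of these quantities (with all figures invented, not taken from the paper), the sketch below computes sensitivity, specificity, positive and negative predictive values from a hypothetical 2x2 table, and estimates the AUC from example scores using the rank-based (Mann-Whitney) formulation.

```python
# Worked example of the test characteristics discussed above.
# All counts and scores are hypothetical.

TP, FP, FN, TN = 90, 10, 15, 185   # invented 2x2 confusion matrix

sensitivity = TP / (TP + FN)       # P(test positive | disease present)
specificity = TN / (TN + FP)       # P(test negative | disease absent)
ppv = TP / (TP + FP)               # P(disease present | test positive)
npv = TN / (TN + FN)               # P(disease absent | test negative)
print(f"Se={sensitivity:.2f} Sp={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")

def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen diseased case scores higher than a non-diseased one."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8, 0.7, 0.55], [0.6, 0.4, 0.3, 0.2]))  # 0.9375
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of disease in the tested population, which is why the paper treats them as distinct characteristics.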
Critical Care Management of Decompensated Right Heart Failure in Pulmonary Arterial Hypertension Patients – An Ongoing Approach
Despite substantial advancements in the diagnosis and specific medical therapy of pulmonary arterial hypertension, this condition continues to represent a major cause of mortality worldwide. In pulmonary arterial hypertension, the continuous increase in pulmonary vascular resistance and the rapid development of right heart failure determine a poor prognosis. Despite targeted therapy, patients inexorably deteriorate over time. Pulmonary arterial hypertension patients with acute right heart failure who need intensive care unit admission present complex disease pathophysiology, and the challenges of intensive care management are multifaceted. Awareness of algorithms for right-sided heart failure monitoring in intensive care units and of targeted pulmonary hypertension therapies, together with recognition of precipitating factors, hemodynamic instability and progressive multisystem organ failure, requires a multidisciplinary pulmonary hypertension team. This paper summarizes the management strategies for acute right-sided heart failure in adult pulmonary arterial hypertension patients based on recently available data.
The Use of Hydroxyurea in the Treatment of COVID-19
Introduction: The rapid worldwide spread of COVID-19 motivated medical professionals to pursue and validate appropriate remedies and treatment protocols. This article aims to analyze the potential benefits of one treatment protocol developed by a group of care providers caring for severe COVID-19 patients.
Methods: The clinical findings of COVID-19 patients who were transferred to a specialized care hospital after unsuccessful treatment at previous institutions were analyzed. The specialized care hospital used a treatment protocol that included hydroxyurea, a medication commonly used to treat sickle cell disease, with the aim of improving respiratory distress in COVID-19 patients. None of the COVID-19 patients included in the analyzed data had been diagnosed with sickle cell disease, and none had previously taken hydroxyurea for any other condition.
Results: In all presented cases, patients reverted to their baseline respiratory health after treatment with the hydroxyurea protocol. No statistically significant correlation between hydroxyurea treatment and COVID-19 outcomes was demonstrated; however, deaths were extremely rare among those taking hydroxyurea.
Conclusions: Fatality numbers were extremely low for those taking hydroxyurea, and the deaths that did occur could be attributed to other underlying issues.
Critical Care Workers Have Lower Seroprevalence of SARS-CoV-2 IgG Compared with Non-patient Facing Staff in the First Wave of COVID-19
Introduction: In early 2020, during the first surge of the coronavirus disease 2019 (COVID-19) pandemic, many health care workers (HCW) were re-deployed to critical care environments to support intensive care teams looking after patients with severe COVID-19. There was considerable anxiety about an increased risk of COVID-19 among these staff. To determine whether critical care HCW were at increased risk of hospital-acquired infection, we explored the relationship between workplace, patient-facing role and evidence of immune exposure to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) within a quaternary hospital providing a regional critical care response. Routine viral surveillance was not available at this time.
Methods: We screened over 500 HCW (25% of the total workforce) for a history of clinical symptoms of possible COVID-19, assigning a symptom severity score, and quantified SARS-CoV-2 serum antibodies as evidence of immune exposure to the virus.
Results: Whilst 45% of the cohort reported symptoms that they considered may have represented COVID-19, 14% had evidence of immune exposure. Staff in patient-facing critical care roles were the least likely to be seropositive (9%), while staff working in non-patient-facing roles were the most likely to be seropositive (22%). Anosmia and fever were the most discriminating symptoms for seropositive status. Older males presented with more severe symptoms. Of the 12 staff who screened positive by nasal swab (10 of them symptomatic), 3 showed no evidence of seroconversion in convalescence.
Conclusions: Patient-facing staff working in critical care do not appear to be at increased risk of hospital-acquired infection; however, the risk of nosocomial infection from non-patient-facing staff may be more significant than previously recognised. Most symptoms ascribed to possible COVID-19 were not accompanied by evidence of immune exposure, although seroprevalence may underrepresent infection frequency. Older male staff were at the greatest risk of more severe symptoms.