Bulletin of the World Health Organization

Maternal and neonatal services in Ethiopia: measuring and improving quality

Maureen E Canavan a, Marie A Brault a, Dawit Tatek a, Daniel Burssa b, Ayele Teshome b, Erika Linnander a & Elizabeth H Bradley a

a. Global Health Leadership Institute, Yale University School of Public Health, 2 Church Street South, New Haven, Connecticut, 06519, United States of America.
b. Federal Ministry of Health, Addis Ababa, Ethiopia.

Correspondence to Maureen Canavan (email: Maureen.Canavan@yale.edu).

(Submitted: 30 May 2016 – Revised version received: 06 December 2016 – Accepted: 30 January 2017 – Published online: 25 April 2017.)

Bulletin of the World Health Organization 2017;95:473-477. doi: http://dx.doi.org/10.2471/BLT.16.178806

Introduction

Maternal mortality remains high in most low- and middle-income countries, and poor quality of intrapartum care limits further progress.1,2 Pregnancy- or childbirth-related complications lead to more than 380 preventable deaths of women per day in these countries.3 Researchers have underscored the importance of improving the quality of labour and delivery care for continued reduction in preventable maternal and neonatal deaths.4,5

Reliable and valid evaluation of the quality of intrapartum care is paramount for addressing high maternal and neonatal mortality rates in low- and middle-income countries, yet progress has been slow. Recent efforts to develop and validate instruments for use in sub-Saharan Africa are encouraging,6,7 but experience integrating such tools into routine practice is limited. Studies of the quality of maternal and neonatal care in low- and middle-income countries largely focus on evaluating a single intervention6,8 rather than multiple aspects of intrapartum care, often in one hospital at a single point in time, which limits the generalizability of the findings. Furthermore, few studies have reported on quality measurement efforts embedded in broader national and international quality improvement strategies, which limits their global impact.

We sought to develop and test a method of measuring the quality of maternal and neonatal care that could be embedded in a larger national performance management initiative. We undertook the study in Ethiopia, where facility-based births are increasing,9 and where the country has invested in hospital quality improvement since 2003.10,11 Although we were unable to measure patient outcomes, we examined both processes of care and the capacity of facilities to deliver those processes, with a focus on elements of intrapartum care.

Local setting

In 2014, as part of national hospital reform efforts, the Ethiopian Federal Ministry of Health identified the quality of hospital-based labour and delivery care as a key priority for improvement. The medical services directorate of the health ministry convened a committee of ministry staff and international partners to create a tool and a protocol for conducting site visits and collecting data on the quality of intrapartum care. We piloted and evaluated the tool and assessment process in the 18 hospitals within the Ethiopia Alliance for Hospital Quality.10 The alliance is a national quality collaborative initiative in which the approximately 140 government hospitals are each assigned to a cluster with one lead hospital to work on prioritized hospital quality issues. Lead hospital status is therefore an indicator of generally better performance and of special responsibility to promote and coordinate quality improvement efforts within a hospital cluster. In the first 18-month cycle, the priority quality target was adherence to the national guidelines for reforming hospital management.12 The second 18-month cycle focused on improving patient satisfaction, and the third cycle focused on improving hospital labour and delivery care.

Components of the hospital performance management framework in Ethiopia have previously been described in detail.10,13 The assessment tool and evaluation process were integrated into existing mechanisms within the 18 lead hospitals, with the potential for scale-up to all government hospitals. Here, we report the experience of implementing the tool in these hospitals.

Approach

We conducted a longitudinal study in all 18 lead hospitals, which were drawn from the five most populous regions of Ethiopia. Baseline data from 2015 showed that, on average (mean values), these hospitals had 136 inpatient beds (standard deviation, SD: 71 beds), provided 113 937 (SD: 68 431) outpatient visits and 7091 (SD: 4492) inpatient admissions annually, and had 2893 (SD: 1835) deliveries per year. Hospitals were staffed by an average of 24 (SD: 11) physicians (including 3 surgeons) and 131 (SD: 42) nurses.

We developed the assessment tool (available from the corresponding author) from the Ethiopian National maternal and newborn care service guidelines (Burssa D, Ethiopian Ministry of Health, unpublished data, February 2015) and the World Health Organization’s Service availability and readiness assessment tool.14 The tool focused on areas of anticipated gaps and health ministry priorities, based on previous performance indicator reports from key national hospitals. It included a total of 110 items within nine domains. In four domains (antenatal care facility assessment; emergency obstetric care services; caesarean section care; and case management of postpartum haemorrhage and eclampsia), researchers checked for the presence or absence of specific items in the medical records (e.g. medication prescriptions, nursing care plans, progress notes and discharge notes). In the remaining five domains (basic infrastructure; paediatric care; laboratory services; accessibility of guidelines and auditing efforts; and infection prevention and patient safety), researchers completed the instrument using data from direct observations (e.g. the presence or absence of infrastructure elements, services and tools in emergency obstetric wards).
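To make the checklist structure concrete, the following minimal Python sketch (ours, not the study’s instrument) represents one hospital’s assessment as nine domains of present/absent items. The domain names follow the text, but every item shown is an invented placeholder; the real tool contains 110 items in total.

```python
# Hypothetical representation of one hospital's assessment. Each item is
# recorded as present (True) or absent (False); items shown are placeholders.
assessment = {
    # Four domains completed from medical-record review
    "antenatal care facility assessment": {"nursing care plan": True,
                                           "discharge note": False},
    "emergency obstetric care services": {"partograph used": True},
    "caesarean section care": {"operative note": True},
    "postpartum haemorrhage/eclampsia management": {"MgSO4 prescribed": True},
    # Five domains completed from direct observation
    "basic infrastructure": {"running water": True, "reliable power": False},
    "paediatric care": {"newborn corner equipped": True},
    "laboratory services": {"blood typing available": True},
    "guidelines and auditing": {"guidelines accessible": False},
    "infection prevention and patient safety": {"sterile gloves stocked": True},
}

def items_met(domain: dict) -> int:
    """Count of items present within one domain."""
    return sum(domain.values())

# Overall score: items met across all domains (out of 110 in the real tool)
overall = sum(items_met(d) for d in assessment.values())
# Domain-specific scores as the proportion of items met per domain
by_domain = {name: items_met(d) / len(d) for name, d in assessment.items()}
```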

Data were collected by staff from the medical services directorate of the Ethiopian Federal Ministry of Health and regional health bureaus, the Clinton Health Access Initiative (Addis Ababa, Ethiopia) and Yale University (New Haven, United States of America). The data collectors were trained to conduct observations and medical record reviews by trainers selected by the health ministry. The training was conducted in Addis Ababa over a period of 1 week.

Two or three data collectors made a 3-day site visit to each lead hospital. During each visit, one researcher focused on the review of medical records (about 1.5 days per hospital), while at least two other researchers conducted observations lasting 1.5 days, for a total sample of 24 days of observation in the 18 hospitals. In each hospital, 19 medical records were chosen by systematic random sampling from the previous 12 months of birth records, for a total sample of 342 records.
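The article names the sampling technique but not the procedure; the sketch below shows one standard way to draw such a sample, assuming the register is available as a simple list of record identifiers (the MRN labels and register size here are invented for illustration).

```python
import random

def systematic_sample(record_ids, n=19, seed=None):
    """Classic systematic random sample: choose a random start within the
    sampling interval k = N // n, then take every k-th record.
    Assumes len(record_ids) >= n."""
    rng = random.Random(seed)
    k = len(record_ids) // n      # sampling interval
    start = rng.randrange(k)      # random start in [0, k)
    return [record_ids[start + i * k] for i in range(n)]

# e.g. 19 charts from a 12-month register of 2893 deliveries
# (2893 is the mean annual delivery volume reported above)
sampled = systematic_sample([f"MRN{i:05d}" for i in range(2893)], n=19, seed=1)
```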

Baseline data were collected in June and July 2015 and follow-up data in February and March 2016. We calculated overall quality scores (the number of items met out of the total items in the instrument) as well as domain-specific scores at baseline and follow-up. We used paired t-tests to determine the statistical significance of changes in the overall and domain-specific mean quality scores.
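A minimal Python sketch of this analysis follows; the scores are placeholders, not study data. Each hospital’s score is the number of items met on the 110-item instrument, and scipy’s paired t-test (stats.ttest_rel) compares baseline with follow-up; dividing by 110 would give the equivalent proportion and an identical t-statistic.

```python
import numpy as np
from scipy import stats

N_ITEMS = 110  # total items in the instrument

# Placeholder scores (items met per hospital) for the 18 lead hospitals
baseline = np.array([60, 72, 55, 68, 63, 70, 58, 66, 74, 61,
                     69, 57, 64, 71, 62, 67, 59, 73])
followup = np.array([88, 95, 82, 92, 86, 94, 80, 90, 97, 85,
                     93, 81, 89, 96, 87, 91, 83, 98])

# Paired t-test on the within-hospital change from baseline to follow-up
result = stats.ttest_rel(followup, baseline)
print(f"mean baseline: {baseline.mean():.1f} / {N_ITEMS}, "
      f"mean follow-up: {followup.mean():.1f} / {N_ITEMS}, "
      f"t = {result.statistic:.2f}, P = {result.pvalue:.3g}")
```

The same call, applied to each domain’s scores in turn, would yield the domain-specific comparisons described above.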

Between baseline and follow-up, the Ethiopia Alliance for Hospital Quality provided training on labour and delivery services for staff at the lead hospitals. Most of the training took place on site and included: refresher training on comprehensive emergency obstetric care for physicians and midwives; maternity services auditing (a new tool developed by the central team); management, analysis and use of data by quality improvement teams, hospital senior management and department heads; medical equipment management for biomedical staff; and customer service training for staff from all disciplines. The trainers were experts selected from the health ministry, Yale University and other partners and government agencies. The average training period was 3 days.

Relevant changes

At baseline the overall mean quality score across the 18 hospitals was 65.6 (SD: 10.5) out of a possible 110 (Table 1). We found significant improvements in quality scores from baseline to follow-up in eight of the nine domains (P values ranged from < 0.05 to < 0.001). The overall summary score increased to 91.2 (SD: 12.4) out of 110 after the training intervention (P < 0.001; Table 1).

Lessons learnt

We found that the measurement method (direct observation and medical record reviews) was generally successful, requiring a total of 3 days and two or three trained surveyors per hospital visit (Box 1). The process produced data sensitive enough to detect changes made during less than a year of quality improvement efforts. With 110 items relevant to processes of intrapartum care and facility capacity to implement such processes, the instrument provided a feasible approach to identify gaps and opportunities for improvement. We documented statistically significant improvements in almost all domains of quality at these 18 hospitals. The findings are encouraging for future quality measurement efforts in low- and middle-income countries, although the instrument would benefit from additional testing and validation. The tool and data described here represent an approach that has been embedded in Ethiopia’s hospital performance management initiative and the Ethiopia Alliance for Hospital Quality, which may enhance the sustained focus on quality of maternal and neonatal care.

Box 1. Summary of main lessons learnt

  • The tool and process for assessing quality of intrapartum care in hospitals in Ethiopia were feasible to implement, requiring 3-day site visits by two or three data collectors.
  • The process produced data sensitive enough to detect significant changes made during less than 1 year of national quality improvement efforts by the Ethiopia Alliance for Hospital Quality.
  • Informed by the World Health Organization’s Service availability and readiness assessment instrument, the tool provided a feasible approach to identify gaps and opportunities for improvements in quality.

The process faced several key challenges. First, owing to constraints of staffing, cost and time, we were unable to measure patient outcomes and had to rely on a combination of direct observation and chart reviews. Charts were often incomplete, which we noted; more complete medical records would allow more precise quality measurement. Second, the process required substantial financial and time investments, and we were able to complete site visits in only 18 hospitals. However, having lead hospitals demonstrate the feasibility of the instrument helped promote its wider acceptance, as other hospitals seek the national recognition given to the lead hospitals. Third, clinical observations may have overestimated the use of quality processes, given that people may alter their behaviour when they know they are being observed. Chart reviews may likewise have overestimated the use of quality processes if incomplete records were more common where quality of care was worse. We therefore interpreted our measure as an upper limit on the use of quality processes; nevertheless, the methods still produced useful data on changes over time. Last, we learnt that embedding such measurement in national hospital performance management efforts can raise the visibility of those efforts, sustain needed resources for data collection and analysis, and prompt hospitals to make changes. Linking performance, as measured by this process, to financial rewards for top-performing and most-improved hospitals was reported to have motivated hospitals to understand and improve their quality data. The process also highlighted clear gaps on which the medical services directorate and health ministry can focus future national quality improvement efforts.


Competing interests:

None declared.

References