FAQs


The selection of sites, and the number of sites, providers, and clients included in a survey, depends on the research question and context. The resources (money, time, personnel, etc.) available to conduct the survey across multiple sites also play a role in the sample size. The following general principles are offered to reduce bias and increase confidence in the results. They are provided in the context of monitoring the quality of abortion care within a system with more than one service type (such as facility and pharmacy). If comparisons will be made between service types, or between geographic areas, then additional considerations apply.

 

  • The monitoring study should ideally begin with a sampling frame – a listing of all possible service sites.
  • If it is not possible to visit all sites, then sites should be selected randomly from within the sampling frame. If client volume data are available before the study is conducted, they can be used to select sites with probability proportionate to client volume.
  • If sites are selected proportionate to client volume, low-volume sites are less likely to be selected; however, it will be more efficient to interview a sufficient number of clients at each selected site.
  • For best use of resources, first consider the margin of error you are able to tolerate for indicators collected via the client exit interviews. Eighteen of the ACQ’s 29 total indicators are based on client exit interview responses.
  • Often, for monitoring studies, users choose a 95% confidence level and a margin of error of 0.05–0.1. The most conservative sample size estimate assumes that only 50% of the population meet the indicator, since this maximizes the required sample.
  • Select clients randomly to reduce bias. Clients should be selected at varying times of day throughout the time that the survey is being conducted.
  • Select clients with consideration to overall client volume, to improve survey efficiency. For example, if you need to interview 100 clients across 10 facilities, then you will likely select more clients from a facility which sees 10 clients per day than from one which sees 3 clients per day.
  • Multiple clients will be selected from each site. Due to a correlation in responses from the same site, there is a tradeoff between increasing the number of total sites (which can reduce overall sample size required for clients) and surveying more clients per site (which can be more resource efficient).
  • An online sample size calculator can also be used to perform these calculations.
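The calculation behind these bullets is the standard sample size formula for a proportion, inflated by a design effect when clients cluster within sites. Below is a minimal sketch in Python; the function name, and the illustrative intra-site correlation value, are our own assumptions and not part of the ACQ materials.

```python
import math

def sample_size(margin_of_error, p=0.5, z=1.96, deff=1.0):
    """Required number of client interviews for a given margin of error.

    p=0.5 is the most conservative assumption (it maximizes the variance
    p*(1-p)); z=1.96 corresponds to a 95% confidence level. `deff` is an
    optional design effect that inflates the sample to account for the
    clustering of clients within sites: deff = 1 + (m - 1) * rho, where m
    is the number of clients interviewed per site and rho the intra-site
    correlation of responses.
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n * deff)

# 95% confidence, +/-5 percentage points, simple random sample:
print(sample_size(0.05))                             # 385
# Same precision with 10 clients per site and an assumed rho of 0.05:
print(sample_size(0.05, deff=1 + (10 - 1) * 0.05))   # 558
```

This illustrates the tradeoff noted above: with the conservative p = 0.5 assumption, roughly 385 interviews suffice under simple random sampling, but clustering within sites pushes the required total upward.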

 

There are many things to consider when figuring out how many clients to sample, and from where. Considerations will differ based upon whether the service is a single site (or single hotline), or if multiple facilities are in the sampling frame.
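When multiple facilities are in the sampling frame, the volume-proportionate allocation described above can be sketched as follows. This is a hypothetical illustration (the function name and the numbers are ours); largest-remainder rounding is one of several reasonable ways to make the per-site counts sum to the target.

```python
def allocate_clients(total_interviews, daily_volumes):
    """Split an interview target across facilities in proportion to
    their client volume, using largest-remainder rounding so the
    per-site allocations always sum exactly to the target."""
    total_volume = sum(daily_volumes)
    raw = [total_interviews * v / total_volume for v in daily_volumes]
    alloc = [int(r) for r in raw]
    # Hand the leftover interviews to the sites with the largest
    # fractional remainders.
    leftovers = total_interviews - sum(alloc)
    by_remainder = sorted(range(len(raw)),
                          key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in by_remainder[:leftovers]:
        alloc[i] += 1
    return alloc

# 100 interviews across three facilities seeing 10, 3, and 7 clients/day:
print(allocate_clients(100, [10, 3, 7]))  # [50, 15, 35]
```

As in the facility example above, a site seeing 10 clients per day contributes far more interviews than one seeing 3 per day.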

 

In selecting providers at a site, start by estimating or establishing the total number of ‘eligible’ providers at each site. An eligible provider for this survey is anyone who would provide the abortion or PAC service to a client. Unless the facility is very large, we recommend interviewing up to four eligible providers per site. In practice, most health facilities may have only one or two eligible providers, while in a pharmacy it may be necessary to consider all employees who may provide a client with medical abortion products. If there are more than four eligible providers, they should be listed and four selected at random.
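The listing-and-random-selection step can be as simple as drawing from the site's roster. A hypothetical sketch (the function name is ours; the four-provider cap follows the recommendation above):

```python
import random

def select_providers(eligible, max_per_site=4, seed=None):
    """Interview every eligible provider at small sites; at larger
    sites, draw `max_per_site` providers at random from the listing."""
    if len(eligible) <= max_per_site:
        return list(eligible)
    return random.Random(seed).sample(eligible, max_per_site)

# A facility with two eligible providers: interview both.
print(select_providers(["midwife", "physician"]))
# A pharmacy roster of six dispensing employees: draw four at random.
print(select_providers(["A", "B", "C", "D", "E", "F"], seed=1))
```

Passing a seed makes the draw reproducible for documentation purposes; in the field, any fair randomization (including drawing names from a hat) serves the same goal.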

Members of the Abortion Care Quality research team may be available to assist users in sample size determination. Please contact us at communications@m4mgmt.org.

If this survey is administered repeatedly to the same sampling frame of service providers, the objective would be to monitor quality over time, or to detect changes after an event, such as a training or quality intervention. If monitoring quality over time, we recommend a minimum of 12 months between survey rounds. If the objective is to understand results of a quality improvement intervention, we would recommend waiting at least 1 month after the intervention or training is complete. Waiting too long after the intervention, such as more than 6 months after its completion, may confound any effects seen after the intervention. This is dependent upon the nature of the intervention and other contextual factors.

In our feasibility testing, we recorded the average time required to complete the survey tools using tablet-based data collection.

 

These were:

  • Site checklist in facilities – 11 minutes
  • Provider survey – 17 minutes
  • Client exit interview – 11 minutes

Some questions may have to be changed for contextual reasons. For example, in our pilot testing in Bangladesh, all surveys replaced “abortion” with “menstrual regulation/post-abortion care” (MR/PAC), as this was the appropriate terminology for the care being provided. While the vast majority of the questions in the surveys have been tested, with translation, across three country and service-provision contexts, we recognize that additional use cases may arise. We recommend that any revised question and/or answer options stay as close as possible to the spirit of the original English question.

The questionnaires provided are the minimum set of questions required to assess all 29 metrics of quality. However, an organization choosing to use this tool might want to ask additional questions. In particular, you will note that we have not included any demographic questions in the client survey. It may be useful to understand who is served at a particular facility, and/or whether responses differ by client characteristics. Additions might include a questionnaire face sheet (with items such as survey date, interviewer details, and a record of consent), client socio-demographic characteristics (age, socio-economic status, parity, marital status, residence, education), or factors associated with the pregnancy and service (gestational age at the time of service, date of last menstrual period, whether the service was an induced abortion or post-abortion care). Similarly, provider characteristics, such as provider age, level of training/qualifications, time at the facility, and time since last training on abortion, could be of interest.

Providers’ clinical competence is being assessed by proxy, through an assessment of provider knowledge. During the pilot testing, clinical competence was assessed via direct observation by a project clinician. For the final metric, the project team chose to offer a knowledge-based proxy assessment instead, which was designed by a clinical team, and feasibility tested in Ethiopia and Nigeria with health care service providers, as well as reviewed by Bangladeshi clinicians.

 

While often considered the gold standard, direct observation is difficult for resource and logistical reasons – notably that the clinic must have at least one, and preferably more than one, client during the observation period. Typically, providers are ‘certified’ as competent via observation during training. The ACQ’s knowledge-based proxy indicators are useful for assessing skill retention among trained providers. They assess those pieces of knowledge that expert clinicians deemed most important for providing high-quality abortion services.

The ACQTool was the product of a multi-year, multi-phase, stakeholder-driven research process in which indicators used to assess abortion service quality were catalogued and then refined based on their association with client outcomes and satisfaction, resulting in an evidence-based, standardized set of service quality metrics. One hundred eleven indicators were field tested in diverse service provision contexts in Bangladesh, Ethiopia, and Nigeria, and were assessed against 12 outcomes. Of these, 34 were associated with at least one outcome. Additional analyses assessed indicators against an index of outcomes. For those indicators which were not associated with any outcomes, the ASQ Resource Group was asked to indicate whether they felt the indicator was still critical to measure. This information was compiled, and further simplification was based upon criteria such as ease of use and generalizability. Of the 29 indicators in the ACQTool metric, 23 were associated with outcomes and six were included based on recommendation by the study’s expert review panel.

Thresholds were decided by expert consensus, and without consideration for the actual results from the pilot and feasibility tests. In support of the goal to create a determination of achieving high quality, the thresholds are high. Thresholds are set at 80%, 90% or 100%, depending on the indicator.

Yes. While the ACQTool was initially tested and validated in a collection of health facilities, pharmacies, and hotlines in Bangladesh, Ethiopia, and Nigeria, the group of abortion service quality experts responsible for the tool’s development strongly believe that it has widespread applicability across country contexts. Users are encouraged to select and evaluate the ACQTool domains and indicators that are applicable to their settings and services. The ACQTool is recommended for use in other in- and out-of-facility settings, such as telemedicine services; licensed or unlicensed private drug outlets; or any other drug-dispensing models that may be present in your country.