This is an Annotation for Transparent Inquiry (ATI) data project.
Project Summary
Scholarship on human rights diplomacy (HRD)—efforts by government officials to engage publicly and privately with their foreign counterparts—often focuses on actions taken to “name and shame” target countries, because private diplomatic activities are unobservable. To understand how HRD works in practice, we explore a campaign coordinated by the US government to free twenty female political prisoners. We compare release rates of the featured women to two comparable groups: a longer list of women considered by the State Department for the campaign; and other women imprisoned simultaneously in countries targeted by the campaign. Both approaches suggest that the campaign was highly effective. We consider two possible mechanisms through which expressive public HRD works: by imposing reputational costs and by mobilizing foreign actors. However, in-depth interviews with US officials and an analysis of media coverage find little evidence of these mechanisms. Instead, we argue that public pressure resolved deadlock within the foreign policy bureaucracy, enabling private diplomacy and specific inducements to secure the release of political prisoners. Entrepreneurial bureaucrats leveraged the spotlight on human rights abuses to overcome competing equities that prevent government-led coercive diplomacy on these issues. Our research highlights the importance of understanding the intersection of public and private diplomacy before drawing inferences about the effectiveness of HRD.
Data Generation
We generated four sources of data for this project:
1. A dataset of political prisoners from 13 countries based on Amnesty International Urgent Action reports between 2000 and 2015.
2. Arrest and release information for a dataset of female political prisoners.
3. A dataset on media attention based on both news articles from LexisNexis and online search trends from Google Trends.
4. Interviews conducted with U.S. government officials and other human rights advocates involved in the #Freethe20 campaign to free political prisoners, launched in September 2015.
We used two sources of data for each of our two research questions. Our first research question was: Did the #Freethe20 campaign have an impact on the release rate of political prisoners? In an ideal world, we would have a comprehensive set of female political prisoners to compare with #Freethe20 prisoners. However, as we explain in the manuscript, in countries with more dire human rights situations, arrests often go unreported. In some cases, the sheer volume of political prisoners makes chronicling information about them challenging, if not impossible. Therefore, in order to construct a comparable set of cases, one strategy we used was to collect information from Amnesty International’s “Urgent Action” campaigns. To our knowledge, Amnesty International has the most comprehensive, publicly available list of contemporary political prisoners globally. Their records are public and searchable, which allowed us to construct a population of political prisoners from the countries targeted by the #Freethe20 campaign. We began our data collection with a base set of Urgent Actions metadata generated by Judith Kelley and Dan Nielson via webscraping from the Amnesty International website. Using a list of URLs that linked to each Urgent Action Report, we coded the name and sex of individuals featured in each Urgent Action Report from 2000 through September 2015 (the start of the #Freethe20 campaign) in the 13 countries featured in the campaign (Azerbaijan, Burma, China, Egypt, Ethiopia, Eritrea, Iran, North Korea, Russia, Syria, Uzbekistan, Venezuela, and Vietnam). Instructions about how we coded this information and sample documents are available in the QDR repository (QDR: MyrickWeinstein_codebook_urgentaction.pdf).
After compiling a base dataset of individuals featured in Urgent Action reports, we identified the women in the dataset (~17% of entries) and conducted additional research about (1) whether these women could be classified as political prisoners, and (2) whether and when these women were released from prison, detention, or house arrest. Here, we relied on both follow-up reporting from Amnesty International as well as a variety of online news sources. We deposited the coding instructions for this process (MyrickWeinstein_codebook_releaseinfo.pdf) and also include documentation on additional online news sources that we used to make a judgment on a particular case.
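Because some women were still detained (or under house arrest) at the close of the study period, the release outcome is right-censored: for those cases we observe only a lower bound on time imprisoned. As a minimal illustration of how a duration variable of this kind can be constructed, consider the sketch below; the function, field names, dates, and study end date are hypothetical and are not the project's actual code (the real coding rules are in MyrickWeinstein_codebook_releaseinfo.pdf).

```python
from datetime import date

def time_to_release(arrest, release, study_end=date(2018, 9, 30)):
    """Return (duration_in_days, released_flag) for one prisoner.

    Prisoners not yet released by the (hypothetical) study end date are
    right-censored: we observe only a lower bound on their duration.
    """
    if release is None:                      # still detained -> censored
        return (study_end - arrest).days, False
    return (release - arrest).days, True

# Fabricated example records, for illustration only
print(time_to_release(date(2014, 5, 1), date(2015, 11, 20)))  # released
print(time_to_release(date(2014, 5, 1), None))                # censored
```

Pairs of (duration, event indicator) in this form are exactly the input that survival-analysis methods expect.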
Our second question was: How and under what conditions did #Freethe20 affect the release rate of female political prisoners? To answer this question, we examined strategies of both public pressure and private, coercive diplomacy. For the former, we collected data on media attention and online search trends. Using LexisNexis, we searched for newspapers and news articles that featured individuals on the #Freethe20 list. For each woman, we downloaded metadata on her news coverage from LexisNexis and deposited these HTML files, which we used to conduct a quantitative analysis of media coverage, with the QDR repository (QDR: MyrickWeinstein_lexisnexis_lastname.HTML). In addition, we gathered online search data between September 2013 and September 2018 for each individual featured in the #Freethe20 campaign from Google Trends, a tool that captures worldwide search interest on Google over time (QDR: MyrickWeinstein_data_f20_googletrends.csv).
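One simple way to summarize a weekly search-interest series of this kind is to compare average interest before and after the campaign's launch. The sketch below illustrates the idea on fabricated data; the column names, dates, and values are hypothetical and do not reflect the structure or contents of the deposited file.

```python
import csv
import io
from statistics import mean

# Fabricated weekly series; Google Trends reports search interest
# on a 0-100 scale relative to peak interest in the query window.
sample = """week,interest
2015-08-02,4
2015-08-09,3
2015-09-06,41
2015-09-13,27
"""

CAMPAIGN_START = "2015-09"  # #Freethe20 launched September 2015

rows = list(csv.DictReader(io.StringIO(sample)))
# ISO-format date strings sort lexicographically, so string
# comparison splits the series at the campaign launch.
pre = [int(r["interest"]) for r in rows if r["week"] < CAMPAIGN_START]
post = [int(r["interest"]) for r in rows if r["week"] >= CAMPAIGN_START]
print(mean(pre), mean(post))  # average weekly interest before vs. after launch
```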
To assess whether government officials provided “carrots and sticks” privately to induce the release of specific prisoners, we relied on elite interviews. We conducted ten in-depth interviews between August 2018 and January 2019 with government officials and human rights advocates involved in the construction and implementation of the #Freethe20 campaign. We used snowball sampling to recruit participants, beginning with officials working in the office of the U.S. Mission to the United Nations during the time the #Freethe20 campaign was conceived and launched. We were able to interview all ten of the individuals we contacted for roughly 30-60 minutes. We further corroborated case-level details with another fifteen government officials working on specific cases. We deposited with QDR a list of questions used to structure the interviews (MyrickWeinstein_interview_questionnaire.pdf). Because these interviews were conducted anonymously, we are unable to include full interview transcripts.
Data Analysis
First, to evaluate the effectiveness of the campaign, we used non-parametric and semi-parametric forms of survival analysis to compare the release rates of women featured in the #Freethe20 campaign to release rates of comparable female political prisoners. We provide more details about this quantitative analysis in the manuscript. For the coding of the underlying qualitative sources used to generate this quantitative data, we provide in the QDR repository detailed coding instructions describing how we developed the base set of political prisoners and identified whether they were male or female (MyrickWeinstein_codebook_urgentaction.pdf), and how we identified and documented their release information (MyrickWeinstein_codebook_releaseinfo.pdf).
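The core of the non-parametric comparison can be illustrated with a minimal Kaplan-Meier estimator, which handles the right-censored durations described above. This is only a sketch on fabricated data, not the project's actual analysis code (the analysis itself is detailed in the manuscript).

```python
def kaplan_meier(durations, released):
    """Kaplan-Meier estimate of the probability of remaining imprisoned.

    durations: days from arrest to release (or to censoring)
    released:  True if a release was observed, False if censored
    Returns [(t, S(t))] at each observed release time.
    """
    events = sorted(zip(durations, released))
    n_at_risk = len(events)
    surv, curve = 1.0, []
    i = 0
    while i < len(events):
        t = events[i][0]
        d = sum(1 for u, e in events if u == t and e)  # releases at time t
        c = sum(1 for u, _ in events if u == t)        # all leaving risk set at t
        if d:
            surv *= 1 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= c
        i += c
    return curve

# Fabricated example: four prisoners, one still detained (censored)
print(kaplan_meier([30, 90, 90, 200], [True, True, False, True]))
```

Estimating this curve separately for the #Freethe20 women and for each comparison group, and comparing them, is the basic logic of the non-parametric analysis; the semi-parametric analysis (e.g., a proportional-hazards model) additionally adjusts for covariates.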
Second, to evaluate whether public pressure via “naming and shaming” impacted the release rate of #Freethe20 women, we conducted quantitative analyses of media attention, drawing both on news coverage from LexisNexis and online search interest from Google Trends, as described in the previous section. We detail the regression analyses we used in the manuscript, and we provide the underlying raw data in the QDR repository.
Third, to evaluate whether government officials used “carrots or sticks” privately, we used elite interviews. In interpreting these results, we generally relied on a few principles. Where there were discrepancies in the recollections of different interviewees, we tended to rely on the interviewee with the closest firsthand knowledge of events. During the course of our interviews, we probed respondents by asking them to think through an opposing or skeptical viewpoint. Where possible, we corroborated the information provided by interviewees about specific cases with online sources, including government reports and news articles.
Logic of Annotation
We used annotations in two ways. First, we used annotations to contextualize qualitative evidence we obtained from interviews. We wanted to ensure that our evidence was compelling (i.e., that claims and quotes were not “cherry-picked” from interviews or taken out of context), but we also needed to maintain the anonymity of our sources. Annotations allowed us to balance these concerns: we could provide context for the quote without including the full transcript, which would risk revealing the identity of the source. These annotations consist primarily of a source excerpt from the interview and an analytic note that puts the claim or quote in context with respect to the rest of our evidence.
Second, we used annotations to make quantitative analyses in the paper more accessible to readers. For example, we include annotations throughout the paper that describe figures and tables in non-technical terms.