3ie: Improve power calculations with a pilot

3ie wrote on June 11 about why you may need a pilot study to improve power calculations:

  1. Low uptake: “Pilot studies help to validate the expected uptake of interventions, and thus enable correct calculation of sample size while demonstrating the viability of the proposed intervention.”
  2. Overly optimistic MDEs: “By groundtruthing the expected effectiveness of an intervention, researchers can both recalculate their sample size requirements and confirm with policymakers the intervention’s potential impact.” It’s also important to know if the MDE is practically meaningful in context.
  3. Underestimated ICCs: “Underestimating one’s ICC may lead to underpowered research, as high ICCs require larger sample sizes to account for the similarity of the research sample clusters.” (A rough sketch of how the ICC and MDE feed into the sample-size math follows this list.)
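
To make the MDE and ICC points concrete, here is a rough back-of-the-envelope sketch (mine, not 3ie’s) of how both quantities enter a clustered sample-size calculation through the design effect. The function name and the numbers are purely illustrative.

  from scipy.stats import norm

  def clusters_per_arm(mde, sd, icc, cluster_size, alpha=0.05, power=0.8):
      """Rough number of clusters per arm for a two-arm cluster RCT."""
      z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
      n_simple = 2 * (z * sd / mde) ** 2      # per-arm n, ignoring clustering
      deff = 1 + (cluster_size - 1) * icc     # design effect from the ICC
      return n_simple * deff / cluster_size   # clusters needed per arm

  # Underestimating the ICC (0.05 vs. 0.15) more than doubles the clusters needed here.
  for icc in (0.05, 0.15):
      print(icc, round(clusters_per_arm(mde=0.2, sd=1.0, icc=icc, cluster_size=30), 1))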

The piece has many strengths, including that 3ie calls out one of its own failures for each point. The authors also share the practical and cost implications of these mistakes.

At work, I might be helping develop an ICC database, so I got a kick out of the authors’ own call for such a tool…

“Of all of the evaluation design problems, an incomplete understanding of ICCs may be the most frustrating. This is a problem that does not have to persist. Instead of relying on assumed ICCs or ICCs for effects that are only tangentially related to the outcomes of interest for the proposed study, current impact evaluation researchers could simply report the ICCs from their research. The more documented ICCs in the literature, the less researchers would need to rely on assumptions or mismatched estimates, and the less likelihood of discovering a study is underpowered because of insufficient sample size.”

…although, if ICCs are rarely reported, I may have my work cut out for me!

You have to pay to be published??

Clockwise from top left: Dr. Francisca Oboh-Ikuenobe, Dr. Nii Quaynor, Mohamed Baloola, Dr. Florence Muringi Wambugu.

I was reading about the new African journal – Scientific African – that will cater specifically to the needs of African scientists. Awesome!

Among the advantages of the new journal is the fact that “publication in Scientific African will cost $200, around half of what it costs in most recognised journals.”

Wait.

You have to pay to be published in an academic journal? Dang.

I suppose that cost is built into whatever research grant you’re working on, but at most other publications, I thought writers got paid to contribute content. Maybe the fee exists so that there’s no direct incentive to publish as much as possible, which could lead to more falsified results? Although the current model seems to have plenty of messed-up incentives, too.

“What are people currently doing?”

Andrew Gelman’s recent blog post responding to a Berk Özler hypothetical about data collection costs and survey design raised a good point about counterfactuals, one I knew in theory but which was phrased in a way that brought new insight:

“A related point is that interventions are compared to alternative courses of action. What are people currently doing? Maybe whatever they are currently doing is actually more effective than this 5 minute patience training?”

It was the question “What are people currently doing?” that caught my attention. It reminded me that one key input for interpreting the results of an RCT is what’s actually going on in your counterfactual. Is the comparison group already using some equivalent alternative to your intervention? Are they using a complementary or incompatible alternative? How will the proposed intervention interact with what’s already on the ground, not just with a hypothetical model of what’s happening on the ground?

This blog post called me to critically investigate which quant and qual methods I could use to understand the context more fully in my future research. It also called me to invest in my ability to do comprehensive, thorough literature reviews and to look at historical data, both of which could further inform my understanding of the context. And, even better, to always get on the ground and talk to people myself. Ideally, I would do this in-depth research before signing onto the kind of expensive, large-scale research project Özler and Gelman are considering in their hypothetical.

“Obviously” in academic writing

Academic writing is full of bad habits. For example, using words like “obviously,” “clearly,” or “of course.” If the author’s claim or reasoning really is obvious to you, these words make you feel like you’re in on the secret; you’re part of the club; you’ve been made a part of the “in” group.

But when you don’t know what they’re talking about, the author has alienated you from their work. They offer no explanation of the concept because it seems so simple to them that they simply won’t deign to explain themselves clearly to those not already “in the know.”

Part of an academic’s job is to clearly explain every argument in their papers. It is lazy and exclusionary to imply that readers should already understand a concept or a line of reasoning.

At worst, it just makes you sound rude and superior:

“Advertising is, of course, the obvious modern method of identifying buyers and sellers.” – Stigler, “The Economics of Information”

He really doubled down on how evident this fact is, which only tells the reader how smart he thinks he is. The sentence could have read, “Advertising is the preferred modern method of identifying buyers and sellers,” and could have included a citation.

On the other hand, a non-exclusionary use of “obviously”:

“Obviously, rural Ecuador and the United States are likely to differ in a large number of ways, but the results in this (and other recent) papers that show a shifting food Engel curve point to the risks inherent in assuming that the Engel curve is stable.” – Schady & Rosero paper on cash transfers to women

The authors had previously compared two papers from two very different contexts; they use “obviously” to acknowledge the potential issues with comparing these two settings. This is an acceptable use because the statement that follows actually is obvious, and it brings any reader on board by acknowledging a possible critique of the argument. It is an acknowledgement of a possible shortcoming on the authors’ part, rather than a test of the reader’s intelligence or prior knowledge.

Grounded Theory, Part 1: What is it?

Photo by Calum MacAulay on Unsplash

I recently read Brené Brown’s Daring Greatly. The book presents Brown’s research, but it can feel more like a personal guidebook to tackling issues of vulnerability and shame.

Because the book has a conversational feel, it’s hard to tell how much of it is based in research and how much in Brown’s individual experiences. She weaves in personal stories frequently, often to demonstrate a prickly emotional experience that was common across her interviews. But when I reached the end of the book, I wanted to know how she drew these theories from the data. I’ve only worked sparingly with qualitative data: how does one “code” qualitative data? How do you analyze it without bringing in all sorts of personal biases? How do you determine its replicability, internal and external validity, and generalizability?

Ingeniously, Brown grounds the book in her research methods with a final chapter on grounded theory methodology. Her summary (also found online here) was a good introduction to how using grounded theory works and feels. But I still didn’t “get” it.

So I did some research.

Grounded Theory

Brown quotes 20th century Spanish poet Antonio Machado at the top of her research methods page:

“Traveler, there is no path. / The path must be forged as you walk.”

This sentiment imbued the rest of the grounded theory (GT) research I did, which seemed bizarre to a quant-trained hopeful economist. I’m used to pre-analysis plans, testing carefully theorized models, and starting with a narrow question.

Grounded theory is about big questions and a spirit of letting the data talk to you.

Developed by Barney Glaser and Anselm Strauss in 1967, GT is a general research methodology for approaching any kind of research, whether qual- or quant-focused. When using GT, everything is data – your personal experiences, interviews, mainstream media, etc. Anything you consume can count, as long as you take field notes.

Writing field notes is one of the key steps of GT: coding those notes (or the data themselves – I’m still a little blurry on this) line-by-line is another. The “codes” are recurring themes or ideas that you see emerging from the data. It is a very iterative methodology: you collect initial data, take field notes, code the notes/data, compile them into memos summarizing your thoughts, collect more data based on your first learnings, code those, compile more memos, collect more data…

Throughout the whole process, you are theorizing and trying to find emergent themes and ideas and patterns, and you should actively seek new data based on what your theories are. You take a LOT of written notes – and it sounds like in the Glaserian tradition, you’re supposed to do everything by hand. (Or is it just not using any algorithms?)
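
To get my quant-trained head around what “coding” means mechanically, here is a toy sketch (my own construction, not Brown’s or Glaser’s actual procedure) of line-by-line coding forced into a data structure. The field notes and codes are invented.

  from collections import defaultdict

  # Invented snippets standing in for lines of field notes.
  field_notes = [
      "She said asking for help felt like admitting failure.",
      "He described hiding the layoff from his family for weeks.",
      "Several participants laughed before describing painful moments.",
  ]

  # Hypothetical codes a researcher might assign while reading each line.
  line_codes = [
      {"vulnerability", "fear of judgment"},
      {"shame", "concealment"},
      {"deflection", "shame"},
  ]

  codebook = defaultdict(list)  # code -> incidents (lines) where it appears
  for line, codes in zip(field_notes, line_codes):
      for code in codes:
          codebook[code].append(line)

  # A "memo" here is just a note on which codes keep recurring.
  for code, incidents in sorted(codebook.items(), key=lambda kv: -len(kv[1])):
      print(f"{code}: {len(incidents)} incident(s)")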

Brown describes the data she collected and her coding methodology:

“In addition to the 1,280 participant interviews, I analyzed field notes that I had taken on sensitizing literature, conversations with content experts, and field notes from my meetings with graduate students who conducted participant interviews and assisted with the literature analysis. Additionally, I recorded and coded field notes on the experience of taking approximately 400 master and doctoral social-worker students through my graduate course on shame, vulnerability, and empathy, and training an estimated 15,000 mental health and addiction professionals.

I also coded over 3,500 pieces of secondary data. These include clinical case studies and case notes, letters, and journal pages. In total, I coded approximately 11,000 incidents (phrases and sentences from the original field notes) using the constant comparative method (line-by-line analysis). I did all of this coding manually, as software is not recommended in Glaserian-grounded theory.” [emphasis mine]

The ultimate goal is to have main concepts and categories emerge from the data, “grounded” in the data, that explain what main problem your subjects are experiencing and how they are trying to solve it. For example, Brown’s work centers on how people seek connection through vulnerability and try to deal with shame in various healthy and unhealthy ways. She started with the big idea of connection and simply asked people what it meant to them, what issues there were around it, and so on, until a theory began to arise from those conversations.

You’re not supposed to have preexisting hypotheses, or even do a literature review to frame specific questions, because that would bias how you approach the data. You’re supposed to remain open and let the data “speak to you.” My first instinct is that it’s impossible to be totally unbiased in how you collect data: invariably, your personal experience and background shape how you read it. Which makes me wonder: how can this research be replicable? How can a “finding” be legitimate as research?

My training thus far has focused on quantitative data, so I’m primed to prefer research that follows the traditional scientific method. Hypothesize, collect data, analyze, rehypothesize, repeat. This kind of research is judged on:

  • Replicability: If someone else followed your protocol, would they get the same result?
  • Internal validity: How well does the research design rule out alternative explanations for the result?
  • External validity: Does the learning apply in other similar populations?
  • Generalizability: Do the results from a sample of the population also apply to the population as a whole?

GT, on the other hand, is judged by:

  • Fit: How closely do concepts fit the incidents (data points)? (aka how “grounded” is the research in the data?)
  • Relevance: Does the research deal with the real concerns of participants and is it of non-academic interest?
  • Workability: Does the developed theory explain how the problem is being solved, accounting for variation?
  • Modifiability: Can the theory be altered as new relevant data are compared to existing data?

I also read (on Wikipedia, admittedly) that Glaser and Strauss see GT as never “right” or “wrong.” A theory only has more or less fit, relevance, workability, or modifiability. And from the way Brown describes it, I got the impression that GT should be grounded in one specific researcher’s approach:

“I collected all of the data with the exception of 215 participant interviews that were conducted by graduate social-work students working under my direction. In order to ensure inter-rater reliability, I trained all research assistants and I coded and analyzed all of their field notes.”

I’m still a bit confused by Brown’s description here. I didn’t know what inter-rater reliability was, so I had assumed it meant that the study needed internal consistency in who was doing the coding. But when I looked it up, it appears to mean the degree to which different researchers code the same data in the same way. So I’m not sure how having one person do all of the coding enables this kind of reliability. Maybe it would, if your GT research were re-done (replicated) by an independent party?

My initial thought is that GT studies sound like they should have two authors who work in parallel but independently, with the same data. Each would develop separate theories, and at the end the study could compare the two parallel work streams to identify what both researchers found in common and where they differed. I still have a lot of questions about how this would work, though.

Lingering Questions

A lot of my questions are functional. How do you actually DO grounded theory?

  • How does GT coding really work? What does “line-by-line” coding mean? Does it mean you code each sentence or literally each line of written text?
  • Do these codes ever get compiled into a database? How do you weight data sources by expertise and quality (if you’re combining studies and interviews with average Joes, do you actively weight the studies)? Could you then do essentially quantitative analysis on a dataset built from binary coding of concepts and categories? (A rough sketch of what that might look like follows this list.)
  • How do you “code” quantitative data? If you had a dataset of 2000 household surveys, would you code each variable for each household as part of your data? How does this functionally work?
  • If you don’t do a literature review ahead of time, couldn’t you end up replicating previous work and not actually end up contributing much to the literature?
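
On the database and binary-coding question above, here is a hypothetical sketch of what a quantitative pass over coded qualitative data might look like: each incident becomes a row, each concept a 0/1 column, and simple counts and co-occurrence tallies fall out of that. Everything here is invented for illustration, not an established GT procedure.

  import pandas as pd

  # Each row is a coded incident; each column marks whether a concept appears (1) or not (0).
  incidents = pd.DataFrame(
      [
          {"shame": 1, "concealment": 1, "vulnerability": 0},
          {"shame": 1, "concealment": 0, "vulnerability": 1},
          {"shame": 0, "concealment": 0, "vulnerability": 1},
      ]
  )

  print(incidents.sum())  # how often each concept appears across incidents

  # A crude look at which concepts tend to travel together.
  co_occurrence = incidents.T @ incidents
  print(co_occurrence)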

And then I also wondered: how is it applicable in my life?

  • Is GT a respected methodology in economics? (I’d guess not.)
  • How could GT enhance quant methods in econ?
  • Has GT been used in economic studies?
  • What kinds of economic questions can GT help us answer?
  • Should I learn more about GT or learn to use it in my own research?

Coming up: Part 2, Grounded Theory & Economics

To answer some of my questions, I want to do an in-depth read of a paper from the 2005 Grounded Theory Review by Frederic S. Lee: “Grounded Theory and Heterodox Economics.” (The journal has another article from 2017 entitled “Rethinking Applied Economics by Classical Grounded Theory: An invitation to collaborate” by Olavur Christiansen that I hope to read, too.)

Are we murderers for not donating our organs? [repost]

Zell Kravinsky risked his life to donate his healthy kidney to a complete stranger. Would you do the same?

Kravinsky is a radical altruist. He believes in giving away as much as possible to others, including his nearly $45 million fortune and his own body parts. Most people would consider donating a kidney as going above and beyond, but Kravinsky told the New Yorker in 2004 that he considers anyone who doesn’t donate their extra kidney a murderer.

We probably don’t, as individuals, have a moral responsibility to donate our organs, but maybe we do have a societal responsibility to find a system by which we can match kidney donors and recipients so that no one has to die just because there isn’t a transplant available. In 2012, there were 95,000 Americans on the wait list for a life-saving kidney, according to economists Gary Becker and Julio Elias. The average wait time for a kidney in 2012 was over four years.

Becker and Elias are proponents of creating a formal, legal market for organs to eliminate long wait times and better match recipients with donors. Right now, it is illegal to sell your organs in most of the world, including in the U.S.

The main risks of monetary compensation for organ donation are the coercion of unwilling donors, the potentially unequal distribution of donors (poor people would be more likely to become donors), and the moral question of whether it is okay to sell body parts, even if they are our own.

Purely moral arguments aside for a moment, there are ways to alleviate the risks of a market for organs. Waiting periods between registration and donation, psychiatric evaluation ahead of registration as an organ donor, and strict identification requirements or even background checks can all combat coercion in the market for organs, while saving the lives of the many Americans who die on an organ waitlist. Becker and Elias also point to the fact that people in lower income brackets are disproportionately affected by long waitlists: the wealthy can fly abroad to obtain a healthy organ or manipulate the current waitlist system in their favor, while poorer Americans face longer wait times. While donors may be disproportionately poor, which raises concerns of implicit economic coercion, the lower income brackets also benefit disproportionately from the policy.

Even more powerful than a legal market alone would be a combination of a legal market for organs and an implied consent law, which would mean people would have to opt out of being an organ donor, rather than the U.S. standard of opting into being a donor. A 2006 study by economists Alberto Abadie and Sebastien Gay found that implied consent laws have a positive impact on organ donations. Under a combination of these two initiatives, essentially all organ donor needs might be met, and a person’s will might come to include provisions for their organs to be harvested and family members to be compensated.

While Kravinsky donated his kidney for free, he once offered a journalist $10,000 to donate a kidney to a stranger, according to Philadelphia Magazine. But the journalist backed out of the deal after his wife and friends convinced him not to go through with it; they argued that the risk of surgery, though relatively minor, was not worth saving a stranger’s life. But if a safe, legal market for organ sales were established, perhaps a market price for organ donation and a normalization of the procedure would allow Americans to save lives and make money, without requiring Kravinsky’s extreme, and perhaps aggressive, sort of altruism.

Originally written for my Economics of Sin senior seminar, spring 2017; previously published at the Unofficial Economist on Medium.