Unofficial Economist

Are we murderers for not donating our organs? [repost]

Zell Kravinsky risked his life to donate his healthy kidney to a complete stranger. Would you do the same?

Kravinsky is a radical altruist. He believes in giving away as much as possible to others, including his nearly $45 million fortune and his own body parts. Most people would consider donating a kidney to be going above and beyond, but Kravinsky told The New Yorker in 2004 that he considers anyone who doesn’t donate their extra kidney a murderer.

We probably don’t, as individuals, have a moral responsibility to donate our organs, but maybe we do have a societal responsibility to find a system by which we can match kidney donors and recipients so that no one has to die just because there isn’t a transplant available. In 2012, there were 95,000 Americans on the wait list for a life-saving kidney, according to economists Gary Becker and Julio Elias. The average wait time for a kidney in 2012 was over four years.

Becker and Elias are proponents of creating a formal, legal market for organs to eliminate long wait times and better match recipients with donors. Right now, it is illegal to sell your organs in most of the world, including in the U.S.

The main risks of monetary compensation for organ donation are the coercion of unwilling donors, the potentially unequal distribution of donors (poor people would be more likely to become donors), and the moral question of whether it is okay to sell body parts, even if they are our own.

Setting purely moral arguments aside for a moment, there are ways to alleviate the risks of a market for organs. Waiting periods between registration and donation, psychiatric evaluations before registration as a donor, and strict identification requirements or even background checks could all combat coercion in the market for organs, while saving the lives of the many Americans who die on an organ waitlist. Becker and Elias also point out that people in lower income brackets are disproportionately affected by long waitlists: the wealthy can fly abroad to obtain a healthy organ or manipulate the current waitlist system in their favor, while poorer Americans face longer wait times. So while donors might be disproportionately poor, which raises concerns of implicit economic coercion, lower income brackets would also benefit disproportionately from the policy.

Even more powerful than a legal market alone would be a combination of a legal market for organs and an implied consent law, which would mean people would have to opt out of being an organ donor, rather than the U.S. standard of opting into being a donor. A 2006 study by economists Alberto Abadie and Sebastien Gay found that implied consent laws have a positive impact on organ donations. Under a combination of these two initiatives, essentially all organ donor needs might be met, and a person’s will might come to include provisions for their organs to be harvested and family members to be compensated.

While Kravinsky donated his kidney for free, he once offered a journalist $10,000 to donate a kidney to a stranger, according to Philadelphia Magazine. But the journalist backed out of the deal after his wife and friends convinced him that the risk of surgery, though relatively minor, was not worth taking, even to save a life. If a safe, legal market for organ sales were established, perhaps a market price for donation and the normalization of the procedure would allow Americans to save lives and make money, without requiring Kravinsky’s extreme, and perhaps aggressive, sort of altruism.

Originally written for my Economics of Sin senior seminar, spring 2017; previously published at the Unofficial Economist on Medium.

Beautiful/Anonymous: “If they say no, they mean no”

CW: sexual assault

A colleague recommended I listen to this latest episode (April 23, 2018).

The caller had been molested by a male babysitter as a child and then raped or sexually assaulted by two different women during his high school years. He is raw and open with his vulnerability, and the episode made for an intense soundtrack to my first run in weeks.

It helps you viscerally understand the broad spectrum of sexual experiences, traumas, and approaches people may have. The whole episode is a clear, loud call for more communication, more openness, and more thoughtfulness in our sexual lives.

Is my job moral? [repost]

If I continue on my current career path, I may end up arbitrating who lives and who dies. (And maybe I’ll tell their story in an economics journal and make a living doing so.)

I am planning on pursuing a career in development work, specifically in the evaluation of development programs. The “gold standard” for evaluating programs is a Randomized Control Trial (RCT).

Consider a non-profit distributing books to children with the goal of improving literacy. The non-profit wants to know whether their books really have any impact on children’s literacy. Ideally, they could observe what happens when they give a group of children the books and also what happens to those same children when they don’t.

However, due to thus far unchangeable time-space continuum properties, this isn’t possible. So, in order to confidently say that their books had an impact, the non-profit needs to compare the literacy scores of children who received the books with other very similar children who didn’t get books. Let’s say they hire me to run an RCT for this very purpose.

To determine which children will get the books (the treatment group) and which children will serve as the comparison group (the control group), I take a list of 100 schools and randomly assign half of them to receive the extra books program. After the books are distributed and some time has passed, I go back to the schools and I have all the children take literacy tests. I compare the test scores of children in each group, and find that, on average, children who received books did much better on the literacy tests.
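
If you like to see the mechanics, here is a minimal sketch of that design in Python. The 100 schools come from the example above, but the literacy scores and the five-point effect are numbers I made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=2017)

# 100 hypothetical schools: randomly assign half to receive the books program
n_schools = 100
schools = np.arange(n_schools)
treated = rng.choice(schools, size=n_schools // 2, replace=False)
is_treated = np.isin(schools, treated)

# Made-up average literacy scores per school after the program,
# assuming (for illustration only) a true effect of 5 points
scores = rng.normal(loc=60, scale=10, size=n_schools)
scores[is_treated] += 5

# The RCT estimate of the program's effect is the difference in mean scores
effect = scores[is_treated].mean() - scores[~is_treated].mean()
print(f"Estimated effect of the books program: {effect:.1f} points")
```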

The non-profit is very happy and uses the results to convince more people to donate to their program. Now they can give books to many more children, and presumably those children’s literacy scores will also increase.

This is all well and good. Even if some children in the study were chosen not to receive books, there are several commonly accepted justifications for why we studied them without providing a service:

  • The non-profit did not have enough money to give books to all the schools anyway. Randomly determining which schools received the books makes it as fair as possible.
  • While the books program was unlikely to have negative effects on children, we didn’t know if it would have no effect or a positive effect at the start. So we didn’t know if we were really depriving children of a chance to improve their literacy.
  • Being able to conduct the evaluation could inform policy and global knowledge on effective ways to improve literacy, and could improve decision-making at the non-profit.
  • In this case, maybe the control group children were the first to receive books when the non-profit’s funding increased.

These are common justifications for development evaluations. They seem quite reasonable — randomly giving out benefits might be the fairest option, we don’t know what the effect really is, and the study will contribute to our shared knowledge and lead to better decisions and even better outcomes in the future.

What if, instead of working on literacy, the non-profit wanted to reduce deaths from childbirth by improving access to and use of health facilities by pregnant women?

Suddenly, so much more is at stake.

If I randomly assign half a county to have access to a special taxi service that drives pregnant women to hospitals for safer deliveries, and one of the women who was assigned NOT to receive the taxi service dies because she gave birth at home, is the evaluation immoral? Am I morally culpable for her death?

Because I work with numbers and data, it is easy to separate myself from the potential negative consequences of the work. I didn’t choose for her to die; the random number generator made me do it.

So what if we’re in a situation where a randomized control trial seems immoral? How can we still learn about what works and what doesn’t?

There are other evaluation methods that can give us an idea of what programs work and which don’t. For example, quasi-experimental methods look at situations where comparable control and treatment groups are incidentally defined by the implementation of a policy. Then we can compare two groups without having to be responsible for directly assigning some people to receive a program while others go without.
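
One common quasi-experimental approach is difference-in-differences (my example, not one from the post): compare how the outcome changed over time in a place that happened to get the policy versus a similar place that didn’t. A tiny sketch with invented numbers:

```python
# Hypothetical shares of births happening in health facilities (percent),
# before and after one district rolls out a transport policy
policy_before, policy_after = 40.0, 55.0          # district with the policy
comparison_before, comparison_after = 38.0, 45.0  # similar district without it

# Difference-in-differences: the policy district's change over time,
# minus the change we would have expected anyway (the comparison district's change)
did_estimate = (policy_after - policy_before) - (comparison_after - comparison_before)
print(f"Estimated policy effect: {did_estimate:.1f} percentage points")  # 8.0
```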

Qualitative or other non-experimental methods involve gathering data by talking to people, doing research, and meeting with different groups to get various opinions on what’s happening. These methods can also help paint a picture of whether a program is having a positive effect.

But the RCT is the gold standard for a reason. A well-designed RCT can tell us what the effect of a program is with much higher confidence and precision than other methods.
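
Part of being “well-designed” is simply enrolling enough participants to detect the effect you care about. Here is a back-of-the-envelope sample-size calculation, using a standard textbook approximation and numbers I’ve assumed; it also ignores the clustering of children within schools, which a real design would have to account for.

```python
from scipy.stats import norm

def sample_size_per_arm(effect, sd, alpha=0.05, power=0.80):
    """Approximate participants needed per arm to detect `effect`
    with a two-sided test, given an outcome standard deviation `sd`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / effect ** 2

# Assumed values: detect a 5-point gain in test scores, standard deviation of 10
print(round(sample_size_per_arm(effect=5, sd=10)))  # roughly 63 per arm
```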

UNICEF Social Policy Specialist Tia Palermo recently wrote a post titled “Are Randomized Control Trials Bad for Children?” for UNICEF’s Evidence for Action blog. She makes a powerful point to consider: What are the alternatives to running RCTs? Are they better or worse?

Palermo sees the alternative as worse: “Is it ethical to pour donor money into projects when we don’t know if they work? Is it ethical not to learn from the experience of beneficiaries about the impacts of a program?” she asks.

Her most convincing argument is that there are ethical implications to every research method we might choose:

“A non-credible or non-rigorous evaluation is a problem because underestimating program impacts might mean that we conclude a program or policy doesn’t work when it really does (with ethical implications). Funding might be withdrawn and an effective program is cut off. Or we might overestimate program impacts and conclude that a program is more successful than it really is (also with ethical implications). Resources might be allocated to this program over another program that actually works, or works better.”

And there are ethical implications to not evaluating programs at all. If non-profits aren’t held to any standard and don’t measure the effect of their program at all, there’s no way to tell which interventions and which non-profits are helping, having no effect on, or even harming the program recipients.

In the case of the woman who died because she didn’t get to a health facility, if the study had never taken place, would she have gotten to a health facility or not? It is impossible to know what would have happened, but it’s not impossible to minimize the risk of harm and maximize the benefits to all study participants. 

Ultimately, RCTs generate important evidence when they are well executed. The findings from such studies can be used to make better decisions at non-profits, at big donor foundations like the Gates Foundation or GiveWell, and at government agencies. Better decisions, in turn, can lead to more lives saved, which is the ultimate goal.

So what to do about the ethical implications of randomly determining who gets access to a potentially life-saving program? Or any program that could have a positive impact on people’s lives?

There are a variety of measures in place to ensure ethical conduct in research, and many more ~official~ economists are thinking about these ideas.

The 1979 Belmont Report helped establish criteria for ethics in human subjects research, focusing on respect for people’s right to make decisions freely, maximizing benefits while doing no harm, and fairness in who bears the risks and receives the benefits. Institutional Review Boards (IRBs) are governing bodies that ensure these principles are upheld in all research involving human subjects.

Economists Rachel Glennerster and Shawn Powers wrote a highly recommended piece on these ethical considerations, “Balancing Risk and Benefit: Ethical Tradeoffs in Running Randomized Evaluations,” which I’m currently reading.

Yet persistent concerns about how to run ethical evaluations suggest that there is more work to do.

Taking the time to consider the ethical implications of each project is key. And I think there is more room for evaluators to read deeply on the subject and really dig into how to make evaluations more just and more beneficial to even those in the control group who don’t receive the program.

A driving principle, especially for researchers running RCTs in the development field, could be that an evaluation must have a direct positive impact on all study participants, either during the study or immediately following its completion. There are a variety of ways, some more commonly used than others, that researchers can apply this principle:

  • If we truly don’t know whether the effect of the program is positive or negative, we can make plans to provide the program to control households if it is found to have a positive effect.
  • If we suspect the program has a positive effect, the control group can be offered the program immediately after the study period has ended.
  • We can offer everyone in the study a base service, while the study tests the effectiveness of an additional service provided only to the treatment group. This way, everyone who is contributing time and information to the study receives some benefit in return.
  • Extensive piloting (testing different ideas and aspects of the evaluation before the start of the study) can also reveal potential moral dilemmas to evaluating any particular program.
  • Community interest meetings can be held before the study is implemented to gain community-level consent to participate in the study. These meetings could also be held quite early on to inform research designs and improve the quality of the study results. For example, in some cultures, it is not appropriate for a man to be alone with a woman he is not related to. If this is the case in a study area, then hiring male staff to conduct surveys would lead to a less successful study.
  • Local staff can be hired to conduct any surveys or data collection to ensure that the surveys are culturally appropriate.
  • We always obtain full, informed consent from participants, which may require translating surveys into participants’ native languages.
  • If study participation requires much time or effort from control group individuals, they can be appropriately compensated.
  • All reports on evaluations (RCTs and other designs) can be fully transparent about research decisions and how ethical concerns were addressed. This will contribute to the international research community’s combined knowledge of how to ensure the rights of participants are provided for in RCTs and other research.
  • The learnings from the study can also be shared with the participating community and should add to their knowledge about their own lives; contributing to the abstract “international research community” is not enough.

Enacting these measures requires more of researchers: some have the potential to affect the legitimacy of the evaluation results if they are not properly accounted for in analysis. But a strong sense of ethics and a dedication to the population being served (often low-income individuals from the Global South, contrasted with well-off researchers from the West) demand that we take the extra time in our research to consider all ethical implications.

Originally published on my Unofficial Economist Medium publication, November 4, 2017.

How should I use my professional development time?

So much learning I want to do

  • Coding classes: Advanced R, Intro to Python, Machine Learning
  • Reading academic articles
    • In economics
    • In global health sector
  • Reading development-related articles
  • Summarizing/critiquing work-related articles
    • And post online
    • And share on internal knowledge management channels
  • Read Poor Economics for real (embarrassed I’ve only read half of it though)
  • Read Field Experiments book
  • Stata challenges from work
  • Plan a brown bag lunch presentation or chai & chat on a topic that interests me
    • An opportunity to practice presenting a slide deck
  • Read/plan for Tech Team Bookclub meetings on Machine Learning
  • Create a mapping portfolio by doing GIS challenges (possible??)

So little time…

I have blocked out three hours a week for my own PD. What do I want to prioritize? How scheduled/organized should I be about this?

I want to use the time for a mix of projects. This week, I want to read and write about one academic article related to health care and economics. It’s something I’ve been meaning to do. That should take 2 hours – I’ll polish and post my thoughts on my own time. Then, I can use the rest of the time to investigate what kind of mapping questions I can start looking into. I’m very excited about maps.

Long-term, I plan to split the three hours into 2-3 chunks so I can make some progress on each of my projects over time.

  • I’ll come back to the coding and the books later
  • I’ll do Tech Team Bookclub and Stata challenges as they arise at work
  • I’ll plan for a mix of reading & writing about articles and GIS for now
    • Maybe once I have some cool maps made, I’ll do a brown-bag and an internal blog post about spatial data in Africa and how it’s relevant for IDinsight

This week

2 hours: I want to break down what heterodox vs. pluralistic vs. mainstream economics are. The idea of alternative economic models really appeals to me, but I don’t know what the big distinctions or points of conflict are. I’ll find some sources on my own time this week, and on Friday, read them and write up a summary to post here.

1 hour: Investigate spatial data available for Kenya, maybe read an article on general spatial data quality in Africa.