“Academic standards are leading us to concentrate on the less important policies. The worst case scenario is development economists risk becoming irrelevant because [they] concentrate on small issues that policy makers don’t think are important.” – Bill Easterly
Summary of interview w/ VoxDev
Growth responds to globalization policies (esp. in Africa)
BUT Intellectual backlash against globalization
Doubts are legitimate – it’s hard to measure causality of these macro dynamics.
But do have persuasive correlations (high rates of inflation are strongly negatively correlated with growth rates). Linked to worse welfare. But rigorous causality determination is difficult – can’t rule out third factor or reverse causality.
Academics are reluctant to study inflation and growth and globalization; we need to present non-causal correlations if that’s the best we can do. It’s the responsibility of the economist to look at these as honestly as possible, even if it isn’t the most rigorous type of evidence. It’s what we’ve got!
Not enough research on big-picture policies.
Zimbabwe is relapsing into high inflation – will be very destructive and needs to be studied. Venezuela another example of poor policies around inflation. Not many policy makers / academic economists will think of these as good policies, but they used to be very common in S. America and Africa in 1970s-90s.
Incentives in economic publishing – prioritize rigorous causal identification. Young economists: by all means, stick with this! Tenured professors: stylized facts are also useful pieces of evidence. Need to work on these big, non-causal issues, too.
Evidence from small-scale programs is less relevant for policy-makers.
Good model of what we should do more of: Acemoglu’s work. Easterly’s own research.
Paradox: development economists really want to talk about these big pictures, but very challenging to publish any research on them b/c of huge prioritization of rigor.
Wishes the “brightest young minds in our field” didn’t have to do RCTs instead of looking at the big questions.
IMF / World Bank are more interested in policy practicalities so they’re not as biased as journals.
(Small) RCTs useful for NGOs / specific aid agency programs.
Governments want institutional reforms or macro policy changes.
In order to have a high-quality writing sample for the RA jobs I’m applying to this fall, I am revamping my thesis! Joy of joys!
I thought about doing this earlier in the year and even created a whole plan to do it, but ended up deciding to work on this blog, learn to code, and pursue other, less horrifying professional development activities.
I say horrifying because the thesis I submitted was HORRIBLY WRITTEN. So so so bad. I cringe every time I look back over it. I had tackled a 6-year project (the length of time it took to write the paper I was basing my thesis on, I later found out) in four months’ time. Too little of the critical thinking I had done on how to handle the piles and piles of data I needed to answer my research question actually ended up in writing.
I thought it would be a drag to fix up the paper. I didn’t expect to still be as intrigued by my research topic (democracy and health in sub-Saharan Africa!) or to be as enthusiastic about practicing my economic writing. I’m taking the unexpected enjoyment as a positive sign that life as a researcher will be awesome.
I’ve been thinking critically about the question of democracy and health and how they’re interrelated and how economic development ties into each. I’ve read (skimmed) a few additional sources that I didn’t even think to look for last time and I already have some good ideas for a new framing of why this research is interesting and important. The first time around, I focused a lot on the cool methodology (spatial regression discontinuity design) because that’s what I spent most of my time working on.
My perspective on the research question has been massively refreshed by time apart from my thesis, new on-the-ground development experience, and the papers I’ve read in the interim.
My first tasks have been to re-read the thesis (yuck), and then gather the resources I need to re-write at least the introduction. I am focusing on the abstract and introduction first because some of the writing samples I will need to submit can be short, and the introduction is as far as most people would get anyway.
To improve my writing and the structure of my introduction, my thesis advisor – who I can now call Erick instead of Professor Gong – recommended reading some of Ted Miguel’s introductions. I printed three and all were well-written and informative in terms of structure; one of them (with Pascaline Dupas) even helped me rethink the context around my research question and link it more solidly to the development economics literature.
The next move is to outline the introduction by writing the topic sentence of each paragraph (a tip taken from my current manager at IDinsight, Ignacio, who is very into policy memo-style writing) using a Miguel-type structure. I’ll edit that structure a bit, then add the text of the paragraphs.
I have to say I barely understand what’s going on in a “market” … my economic background is very individual- and household-focused.
But understanding the effect of an intervention on a community as a whole, not just on those treated, seems really important. This is partly why we look at spillover effects – the effect of an intervention on the neighbors of the treated, who didn’t receive the program themselves.
General equilibrium or market effects investigate a level up from spillover effects and treatment effects – they look at the cumulative impact of the program on the way the economy operates.
I couldn’t explain in any clear terms how one studies a specific market, or what counts as being part of one, but I’m still excited for this upcoming paper on the market-level effects of cash transfers – a point that has been debated recently, after evidence of potential negative spillover effects came out.
Disclaimer: I was a little drunk on power (calculations) when I wrote this, but it’s me figuring out that econometrics is something I might want to specialize in!
I think I just figured out what I want to do with the rest of my career.
I want to contribute to how people actually practice data analysis in the development sector from the technical side.
I want to write about study design and the technical issues that go into running a really good evaluation, and I want to produce open source resources to help people understand and implement the best technical practices.
This is always something that makes me really excited. I don’t think I have a natural/intuitive understanding of some of the technical work, but I really enjoy figuring it out.
And I love writing about/explaining technical topics when I feel like I really “get” a concept.
This is the part of my current job that I’m most in love with. Right now, for example, I’m working on a technical resource to help IDinsight do power calculations better. And I can’t wait to go to work tomorrow and get back into it.
I’ve also been into meta-analysis papers that bring multiple studies together. In general, the meta-practices, including ethical considerations, of development economics are what I want to spend my time working on.
I’ve had this thought before, but I haven’t really had a concept of making that my actual career until now. But I guess I’ve gotten enough context now that it seems plausible.
I definitely geek out the most about these technical questions, and I really admire people who are putting out resources so that other people can geek out and actually run better studies.
I can explore the topics I’m interested in, talk to people who are doing cool work, create practical tools, and link these things that excite me intellectually to having a positive impact in people’s lives.
My mind is already racing with cool things to do in this field. Ultimately, a website that is essentially an encyclopedia of development economics best practices would be so cool. A way to link all open source tools and datasets and papers, etc.
But top of my list for now is doing a good job with and enjoy this power calculations project at work. If it’s as much fun as it was today, I will be in job heaven.
Goddess-Economist Seema Jayachandran wrote about economists’ gendered view of their own discipline back in March. Dr. Jayachandran and PhD student co-author Jamie Daubenspeck investigate:
Share of woman authors across development topics: Drawing on all empirical development papers from 2007-2017, they find, out of all papers, “51% were written by all men, and 15% by all women. The average female share of authors was 28% (weighting each paper equally).” Gender, health, trade, migration, education, poverty, and conflict are the development topics with a greater-than-average share of woman authors.
Economists’ perspectives on under-researched topics: They show that there is a negative correlation between a topic’s share of woman authors and perceptions that the topic is under-researched, a finding they call “a bit depressing.” Same. (They also write that perceptions of “whether a topic is under-researched are not significantly correlated with the actual number of articles on the topic published in the JDE over our sample period.” So what do these economists even know?)
I love their thoughtful outline of the methodology they used for this little investigation. Describing the world with data is awesome.
I ended up hearing about/reading about several amazing humans this week:
Dr. Nneka Jones Tapia – the clinical psychologist running Cook County Jail – had amazing things to say on the Ezra Klein Show last year in July. She is powerful and thoughtful and doing amazing things to improve prisons in the US.
New Zealand PM Jacinda Ardern gave birth on the 21st. She’s only the second world leader to give birth in office, after Pakistan’s Benazir Bhutto. The best part is that she is 100% unapologetic about being a mother in office, even while she acknowledges the challenges she will personally face in balancing a new baby and work.
These two leaders are just out there in the world leading noble, thoughtful, innovative lives. In love.
And then there’s MJ Hegar, who’s running for Congress against a tea partier in Texas. Her amazingly directed ad shows how enduring her dedication to service has been throughout her life:
My best friend Riley and I made a pact to meditate daily for ten days, starting on Monday. I have done it each day this week and my week has felt fuller and more focused than ever. Not willing to attribute full causality to the meditation, but it definitely has been a tool to start my day well and a reminder throughout the day that I can and want to stay focused and in the moment.
The Ezra Klein Show interviews are always on point, and “The Green Pill” episode featuring Dr. Melanie Joy was no exception. The June 11 show discussed “carnism” – the unspoken ideology that tells us eating animals, wearing animals, and otherwise instrumentalizing them is good.
I’ve been mulling it over for a while now, but the episode’s frank conversation about why veganism is so hard to talk about pleasantly – and why it’s so hard for people to shift out of a carnist mindset – motivated me to head back down the vegetarian path.
I was vegetarian for a year or so in college, but now I’m aiming for veganism, or something close. I’m not eating meat and am not actively purchasing or eating eggs or milk. At this point, I’ll eat eggs or milk or other animal products that are already baked into something – a slice of cake, for example. Eventually, I want to phase out pretty much all animal products. But I’m giving myself some space to adjust and dial back the carnism bit by bit. The incremental approach should let me stick to it better.
Cheese will probably be my “barrier food” – apparently this is so common, there’s a webpage that specifically teaches how to overcome the cheese block. (hehe)
They recommend slowly replacing cheese with guac or hummus, and taking a large break from any cheese before trying vegan cheese. (Which won’t be a problem since I doubt there’s any vegan cheese in Kenya to begin with!)
It is not mango season in Kenya, but I had the best mango this week. Maybe because I cut it myself for the first time, making an absolute mess. Or maybe because it was the key ingredient to the first lettuce-containing salad I’ve ever made myself at home. But there’s a lot to be said for a fruit that encourages you to embrace your messy nature.
Take the example of a variable reporting if someone is judged to be very poor, poor, moderately rich, or rich. This could be the outcome of a participatory wealth ranking (PWR) exercise like that used by Village Enterprise.
In a PWR exercise, local community leaders can identify households that are most vulnerable. These rankings can then be used to target a development program (like VE’s graduation-out-of-poverty program that combines cash transfers with business training) to the community members that are most in need.
Let’s say that you want to include the PWR results in a regression analysis as a covariate. You have a dataset of all the relevant variables for each household, including a variable that records whether the household was ranked in the PWR exercise as very poor, poor, moderately rich, or rich.
You need to convert this string variable (text) into a numeric value. You could assign each option a value from 1 to 4, with 1 being “very poor” and 4 meaning “rich” … but you shouldn’t use this directly in your regression.
If you have a variable that moves from 1 to 2 to 3 to 4, you’re implying that there is a linear pattern between each of those values. You’re saying that the effect on your outcome variable of going from very poor (1) to poor (2) is the same as the effect of going from poor (2) to moderately rich (3). But you don’t know the real relationship between the different PWR levels – the ranking is ordinal, so the categories have an order but no meaningful spacing. You can’t make the linear assumption.
So instead, you should represent the ranking with four binary variables: Ranked “very poor” or not? “Poor” or not? “Moderately rich” or not? “Rich” or not?
This Stata support page does a great job of summarizing how to apply this in your regression code or create binary variables from categorical variables using easy shortcuts. I like:
reg y x i.pwr
But how do you interpret the results?
When you create dummies (binary variables) out of a categorical variable, you use one of the group dummies as the reference group and don’t actually include it in the regression.
By default, Stata uses the lowest-valued group as the reference – in this case, “very poor.” So in the regression, you’ll have three dummies, not four. Being “very poor” is the base condition against which the other rankings are compared.
Let’s say there is a statistically significant, positive coefficient on the “moderately rich” dummy in your regression results. That means that, compared to the base condition of being very poor, being moderately rich has a positive effect on your outcome variable.
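The same setup can be sketched outside Stata, too. Here’s a toy version in Python with pandas (the data are made up for illustration); `get_dummies` plays the role of `i.pwr`, except that you drop the base category yourself:

```python
import pandas as pd

# Hypothetical household data with PWR rankings.
levels = ["very poor", "poor", "moderately rich", "rich"]
df = pd.DataFrame(
    {"pwr": ["poor", "very poor", "rich", "moderately rich", "very poor"]}
)
# An ordered categorical keeps the levels in PWR order.
df["pwr"] = pd.Categorical(df["pwr"], categories=levels)

# One dummy per category, then drop the base ("very poor") before
# regressing -- mirroring what Stata's i.pwr does automatically.
dummies = pd.get_dummies(df["pwr"], prefix="pwr")
X = dummies.drop(columns="pwr_very poor")
print(list(X.columns))  # three dummies enter the regression, not four
```

The coefficients on those three dummies are then read relative to the omitted “very poor” group, exactly as described above.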
When I was at Middlebury, I took classes like Famine & Food Security and Economics of Global Health, learning more and more about humanitarian aid and international development. It didn’t really sink in that these were two different sectors until today.
I had a chance to talk to someone who worked for REACH – an organization that tries to collect the most accurate data possible from war zones and humanitarian emergency areas to inform policy. Seems like pretty important work.
Our conversation solidified to me that the humanitarian sector is different from the development sector. The humanitarian sector has a totally different set of actors (dominated by the UN) and missions, although the ultimate mission of a better world is the same.
Development is about the ongoing improvement of individuals living in a comparatively stable system; humanitarian aid is about maintaining human rights and dignities when all those systems break down.
There’s some overlap, of course – regions experiencing ongoing war and violence may be targeted by development and humanitarian programs alike, for example. I also think the vocabulary blurs a bit when discussing funding for development and humanitarian aid.
Development isn’t quite sure how it feels about human rights, though. Rights tend to be valued when they lead to economic development, which most development work treats as the end goal.
I’d say that my definition of what I want to do in the development sector bleeds over into the human rights and humanitarian arenas. (I’m sure there’s also an important distinction between the human rights sector and the humanitarian sector – probably that the humanitarian sector is more about meeting people’s most basic needs in crisis, although human rights workers also deal with abuses during crises.)
My interest in humanitarian work has been piqued by this conversation today, though. It was also piqued by my former roommate’s description of her work with Doctors without Borders. The idea of going on an intense mission trip for a period of time, being all-in, then taking a break is kind of appealing. Although REACH itself wasn’t described as a great work experience. Really long hours, but fairly repetitive work.
Maybe I should read more about the economics/humanitarian aid/data overlap.
Özler summarizes his main points quite succinctly himself:
“Think about the meaningful effect size in your context and given program costs and aims.
Power your study for large effects, which are less likely to disappear in the longer run.
Try to use all the tricks in the book to improve power and squeeze more out of every dollar you’re spending.”
He gives a nice, clear example to demonstrate: a 0.3 SD detectable effect size sounds impressive, but for some datasets, this would really only mean a 5% improvement which might not be meaningful in context:
“If, in the absence of the program, you would have made $1,000 per month, now you’re making $1,050. Is that a large increase? I guess, we could debate this, but I don’t think so: many safety net cash transfer programs in developing countries are much more generous than that. So, we could have just given that money away in a palliative program – but I’d want much more from my productive inclusion program with all its bells and whistles.”
Usually (in an academic setting), your goal is to have the power to detect a really small effect size so you can get a significant result. But Özler makes the opposite point: it can be advantageous to power yourself to detect only a meaningful effect size, decreasing both the required sample size and the cost.
He also advises, like the article I posted about yesterday, that piloting could help improve power calculations via better ICC estimates: “Furthermore, try to get a good estimate of the ICC – perhaps during the pilot phase by using a few clusters rather than just one: it may cost a little more at that time, but could save a lot more during the regular survey phase.”
My only issue with Özler’s post is his chart, which shows the tradeoffs between effect size and the number of clusters. His horizontal axis is labeled “Total number of clusters” – per arm or in total, Berk?!? It’s per arm, not total across all arms. There should be more standardized and intuitive language for describing sample size in power calcs.
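To make the clusters-vs-MDE tradeoff concrete, here’s a back-of-the-envelope sketch using the standard two-arm cluster-RCT approximation (the function name and the example numbers are mine, not from the post) – and note that the argument is explicitly clusters *per arm*:

```python
from statistics import NormalDist

def mde_sd_units(clusters_per_arm, cluster_size, icc, alpha=0.05, power=0.80):
    """Minimum detectable effect (in SD units) for a two-arm cluster RCT.

    Textbook approximation: MDE = (z_{1-a/2} + z_{power}) * sqrt(2 * DEFF / N_arm),
    where DEFF = 1 + (m - 1) * ICC is the design effect.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    deff = 1 + (cluster_size - 1) * icc
    var = 2 * deff / (clusters_per_arm * cluster_size)
    return z * var ** 0.5

# Doubling the clusters per arm shrinks the detectable effect,
# but with diminishing returns per dollar.
for j in (20, 40, 80):
    print(j, round(mde_sd_units(j, cluster_size=25, icc=0.05), 3))
```

With 25 households per cluster and an ICC of 0.05, going from 20 to 80 clusters per arm roughly halves the MDE – which is exactly why an axis label that confuses “per arm” with “total” can double your implied budget.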
A new paper by Jakiela and Ozier sounds like an insane amount of data work to classify 4,336 languages by whether they gender nouns. For example, in French, a chair is feminine – la chaise.
They find, across countries:
Gendered language = greater gaps in labor force participation between men and women (female labor force participation 11.89 percentage points lower)
Gendered language = “significantly more regressive gender norms … on the magnitude of one standard deviation”
Within-country findings from Kenya, Niger, Nigeria, and Uganda – countries with sufficient and distinct in-country variation in language type – further show statistically significant lower educational attainment for women who speak a gendered language.
(Disclaimer: The results aren’t causal, as there are too many unobserved variables that could be at play here.)
As the authors say: “individuals should reflect upon the social consequences of their linguistic choices, as the nature of the language we speak shapes the ways we think, and the ways our children will think in the future.”
3ie wrote on June 11 about why you may need a pilot study to improve power calculations:
Low uptake: “Pilot studies help to validate the expected uptake of interventions, and thus enable correct calculation of sample size while demonstrating the viability of the proposed intervention.”
Overly optimistic MDEs: “By groundtruthing the expected effectiveness of an intervention, researchers can both recalculate their sample size requirements and confirm with policymakers the intervention’s potential impact.” It’s also important to know if the MDE is practically meaningful in context.
Underestimated ICCs: “Underestimating one’s ICC may lead to underpowered research, as high ICCs require larger sample sizes to account for the similarity of the research sample clusters.”
The piece has many strengths, including that 3ie calls out one of their own failures on each point. They also share the practical and cost implications of these mistakes.
At work, I might be helping develop an ICC database, so I got a kick out of the authors’ own call for such a tool…
“Of all of the evaluation design problems, an incomplete understanding of ICCs may be the most frustrating. This is a problem that does not have to persist. Instead of relying on assumed ICCs or ICCs for effects that are only tangentially related to the outcomes of interest for the proposed study, current impact evaluation researchers could simply report the ICCs from their research. The more documented ICCs in the literature, the less researchers would need to rely on assumptions or mismatched estimates, and the less likelihood of discovering a study is underpowered because of insufficient sample size.”
…although, if ICCs are rarely reported, I may have my work cut out for me!
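For what it’s worth, the one-way ANOVA estimator behind most reported ICCs is simple enough to sketch in a few lines. This is a simplified, balanced-cluster version (the function name and toy data are mine), showing how a pilot’s clusters could feed an ICC estimate:

```python
import statistics

def anova_icc(clusters):
    """One-way ANOVA ICC estimate from balanced clusters.

    'clusters' is a list of equal-length lists of outcome values,
    one inner list per cluster (e.g. from a pilot survey).
    ICC = (MSB - MSW) / (MSB + (m - 1) * MSW) for cluster size m.
    """
    k = len(clusters)                 # number of clusters
    m = len(clusters[0])              # observations per cluster
    grand = statistics.mean(v for c in clusters for v in c)
    means = [statistics.mean(c) for c in clusters]
    # Mean square between clusters and mean square within clusters.
    msb = m * sum((mu - grand) ** 2 for mu in means) / (k - 1)
    msw = sum((v - mu) ** 2 for c, mu in zip(clusters, means) for v in c) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)
```

If everyone within a cluster looks identical, the estimate goes to 1; if cluster means are identical but individuals vary, it goes to (or below) 0 – which is why piloting a few clusters rather than one matters so much.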
I was reading about the new African journal – Scientific African – that will cater specifically to the needs of African scientists. Awesome!
Among the advantages of the new journal is the fact that “publication in Scientific African will cost $200, around half of what it costs in most recognised journals.”
You have to pay to be published in an academic journal? Dang.
I guess that cost is probably built into whatever research grant you’re working on, but in most other publications, I thought writers got paid to contribute content. I guess it’s so that there’s not a direct incentive to publish as much as possible, which could lead to more falsified results? Although it seems like the current model has a lot of messed up incentives, too.
Andrew Gelman’s recent blog post responding to a Berk Özler hypothetical about data collection costs and survey design raised a point about counterfactuals that I knew in theory, but phrased it in a way that brought new insight:
“A related point is that interventions are compared to alternative courses of action. What are people currently doing? Maybe whatever they are currently doing is actually more effective than this 5 minute patience training?”
It was the question “What are people currently doing?” that caught my attention. It reminded me that one key input for interpreting results of an RCT is what’s actually going on in your counterfactual. Are they already using some equivalent alternative to your intervention? Are they using a complementary or incompatible alternative? How will the proposed intervention interact with what’s already on the ground – not just how will it interact in a hypothetical model of what’s happening on the ground?
This blogpost called me to critically investigate what quant and qual methods I could use to understand the context more fully in my future research. It also called me to invest in my ability to do comprehensive and thorough literature reviews and look at historical data – both of which could further inform my understanding of the context. And, even better, to always get on the ground and talk to people myself. Ideally, I would always do this in-depth research before signing onto the kind of expensive, large-scale research project Özler and Gelman are considering in the hypothetical.
Academic writing is full of bad habits. For example, using words like “obviously,” “clearly,” or “of course.” If the author’s claim or reasoning really is obvious to you, these words make you feel like you’re in on the secret; you’re part of the club; you’ve been made a part of the “in” group.
But when you don’t know what they’re talking about, the author has alienated you from their work. They offer no explanation of the concept because it seems so simple to them that they simply won’t deign to explain themselves clearly to those not already “in the know.”
Part of an academic’s job is to clearly explain every argument in their papers. It is lazy and exclusionary to imply readers should already understand a concept or a path of reasoning.
At worst, it just makes you sound rude and superior. In one paper I read, the author doubled down on how evident a fact supposedly was, which only tells the reader how smart he thinks he is. The sentence could have read, “Advertising is the preferred modern method of identifying buyers and sellers,” and could have included a citation.
On the other hand, a non-exclusionary use of “obviously”:
“Obviously, rural Ecuador and the United States are likely to differ in a large number of ways, but the results in this (and other recent) papers that show a shifting food Engel curve point to the risks inherent in assuming that the Engel curve is stable.” – Schady & Rosero paper on cash transfers to women
The authors had previously compared two papers from two very different contexts; they use “obviously” to acknowledge the potential issues with comparing these two settings. This is an acceptable use case because the statement that follows actually is obvious and is bringing any reader on board by acknowledging a possible critique of the argument. It is an acknowledgement of possible lack on the author’s part, rather than a test of the reader’s intelligence or prior knowledge.
I recently read Brené Brown’s Daring Greatly. The book presents Brown’s research, but it can feel more like a personal guidebook to tackling issues of vulnerability and shame.
Because the writing has a conversational feel, it’s hard to tell how much of the book is based in research and how much in Brown’s individual experiences. She weaves in personal stories frequently, often to demonstrate a prickly emotional experience that was common across her interviews. But when I reached the end of the book, I wanted to know how she drew these theories from the data. I’ve only worked sparingly with qualitative data: how does one “code” qualitative data? How do you analyze it without bringing in all sorts of personal biases? How do you determine its replicability, internal and external validity, and generalizability?
Ingeniously, Brown grounds the book in her research methods with a final chapter on grounded theory methodology. Her summary (also found online here) was a good introduction to how using grounded theory works and feels. But I still didn’t “get” it.
So I did some research.
Brown quotes 20th century Spanish poet Antonio Machado at the top of her research methods page:
“Traveler, there is no path. / The path must be forged as you walk.”
This sentiment imbued the rest of the grounded theory (GT) research I did, which seemed bizarre to a quant-trained, hopeful economist. I’m used to pre-analysis plans, testing carefully theorized models, and starting with a narrow question.
Grounded theory is about big questions and a spirit of letting the data talk to you.
Founded by Barney Glaser and Anselm Strauss in 1967, GT is a general research methodology for approaching any kind of research, whether qual- or quant-focused. When using GT, everything is data – your personal experiences, interviews, mainstream media, etc. Anything you consume can count, as long as you take field notes.
Writing field notes is one of the key steps of GT: coding those notes (or the data themselves – I’m still a little blurry on this) line-by-line is another. The “codes” are recurring themes or ideas that you see emerging from the data. It is a very iterative methodology: you collect initial data, take field notes, code the notes/data, compile them into memos summarizing your thoughts, collect more data based on your first learnings, code those, compile more memos, collect more data…
Throughout the whole process, you are theorizing and trying to find emergent themes and ideas and patterns, and you should actively seek new data based on what your theories are. You take a LOT of written notes – and it sounds like in the Glaserian tradition, you’re supposed to do everything by hand. (Or is it just not using any algorithms?)
Brown describes the data she collected and her coding methodology:
“In addition to the 1,280 participant interviews, I analyzed field notes that I had taken on sensitizing literature, conversations with content experts, and field notes from my meetings with graduate students who conducted participant interviews and assisted with the literature analysis. Additionally, I recorded and coded field notes on the experience of taking approximately 400 master and doctoral social-worker students through my graduate course on shame, vulnerability, and empathy, and training an estimated 15,000 mental health and addiction professionals.
I also coded over 3,500 pieces of secondary data. These include clinical case studies and case notes, letters, and journal pages. In total, I coded approximately 11,000 incidents (phrases and sentences from the original field notes) using the constant comparative method (line- by- line analysis). I did all of this coding manually, as software is not recommended in Glaserian-grounded theory.” [emphasis mine]
The ultimate goal is to have main concepts and categories emerge from the data, “grounded” in the data, that explain what main problem your subjects are experiencing and how they are trying to solve that problem. For example, Brown’s work centers on how people seek connection through vulnerability and try to deal with shame in various healthy and unhealthy ways. She started with this big idea of connection and just started asking people about what that meant, what issues there were around it, etc. until a theory started to arise from those conversations.
You’re not supposed to have preexisting hypotheses, or even do a literature review to frame specific questions, because that will bias how you approach the data. You’re supposed to remain open and let the data “speak to you.” My first instinct on this front is that it’s impossible to be totally unbiased in how you collect data. Invariably, your personal experience and background determine how you read the data. Which makes me question – how can this research be replicable? How can a “finding” be legitimate as research?
My training thus far has focused on quantitative data, so I’m primed to preference research that follows the traditional scientific method. Hypothesize, collect data, analyze, rehypothesize, repeat. This kind of research is judged on:
Replicability: If someone else followed your protocol, would they get the same result?
Internal validity: How consistent, thorough, and rigorous is the research design?
External validity: Does the learning apply in other similar populations?
Generalizability: Do the results from a sample of the population also apply to the population as a whole?
GT, on the other hand, is judged by:
Fit: How closely do concepts fit the incidents (data points)? (aka how “grounded” is the research in the data?)
Relevance: Does the research deal with the real concerns of participants and is it of non-academic interest?
Workability: Does the developed theory explain how the problem is being solved, accounting for variation?
Modifiability: Can the theory be altered as new relevant data are compared to existing data?
I also read (on Wikipedia, admittedly) that Glaser & Strauss see GT as never “right” or “wrong.” A theory only has more or less fit, relevance, workability, or modifiability. And from the way Brown describes it, I got the impression that GT should be grounded in one specific researcher’s approach:
“I collected all of the data with the exception of 215 participant interviews that were conducted by graduate social-work students working under my direction. In order to ensure inter-rater reliability, I trained all research assistants and I coded and analyzed all of their field notes.”
I’m still a bit confused by Brown’s description here. I didn’t know what inter-rater reliability was, so I had assumed it meant that the study needed internal consistency in who was doing the coding. But when I looked it up online, it appears to mean the degree to which different researchers code the same data in the same way. So I’m not sure how having one person do all of the coding enables this kind of reliability. Maybe if your GT research is re-done (replicated) by an independent party?
My initial thought is that GT research sounds like it should have two authors who work in parallel but independently, with the same data. Each would develop separate theories, and at the end the study could compare the two parallel work streams to identify where the researchers agreed and where they differed. I still have a lot of questions about how this works, though.
A lot of my questions are functional. How do you actually DO grounded theory?
How does GT coding really work? What does “line-by-line” coding mean? Does it mean you code each sentence or literally each line of written text?
Do these codes ever get compiled in a database? How do you weight data sources by their expertise and quality (if you’re combining studies and interviews with average Joes, do you actively up-weight the studies)? Could you then do essentially quantitative analysis on a dataset built from binary coding of concepts and categories?
How do you “code” quantitative data? If you had a dataset of 2000 household surveys, would you code each variable for each household as part of your data? How does this functionally work?
If you don’t do a literature review ahead of time, couldn’t you end up replicating previous work and not actually end up contributing much to the literature?
And then I also wondered: how is it applicable in my life?
Is GT a respected methodology in economics? (I’d guess not.)
How could GT enhance quant methods in econ?
Has GT been used in economic studies?
What kinds of economic questions can GT help us answer?
Should I learn more about GT or learn to use it in my own research?