
Database of sources investigating interventions to reduce meat and animal product consumption


Authors:

Ben Stevenson, Independent, orcid.org/0009-0005-3961-0011
Jacob R. Peacock, Humane and Sustainable Food Lab, Quantitative Sciences Unit, Stanford University, United States, orcid.org/0000-0002-4834-8132
Julia Fabienne Sandkühler, Department of Psychology, University of Bonn, Germany, orcid.org/0000-0002-5585-9539
Jessica E. Hope, Humane and Sustainable Food Lab, Quantitative Sciences Unit, Stanford University, Palo Alto, CA, United States, orcid.org/0000-0003-4389-1037
Constanza Arévalo, Independent, orcid.org/0000-0001-8127-4414
Joanna E. Anderson, Research Consultant, orcid.org/0009-0001-4155-7775
Maya B. Mathur, Quantitative Sciences Unit and Department of Pediatrics, Stanford University, United States, orcid.org/0000-0001-6698-2607

Introduction

Reducing meat and edible animal product (MAP) consumption is a goal of advocates for animal welfare, the climate, and human health. There is a large and growing empirical literature on MAP reduction interventions, but it remains far from clear which interventions are the most effective. Systematic reviews and meta-analyses can remedy this problem by providing stronger evidence than primary sources about which interventions work in which contexts. Evidence synthesis is particularly important for MAP reduction work because the literature is scattered across multiple disciplines (e.g., environmental science, medicine, and animal advocacy), as well as between academic and gray literature. By default, advocates and researchers will struggle to completely and accurately understand the state of the evidence.

To contribute to solving this problem, we developed a database of sources investigating interventions to reduce MAP consumption, which we are now releasing as a resource to help advocates and researchers evaluate the evidence base. This database contributes to an emerging MAP reduction “meta-literature,” including systematic reviews (see Grundy, 2022 for a systematic review of those reviews), meta-analyses (Di Gennaro et al., 2024; Green et al., 2024; Weikertová and Urban, 2022; Mathur et al., 2021; Nisa, 2018), and a living review (Sleegers et al., 2025). We advance the conversation by providing a large set of sources that (a) observe actual, hypothetical, or self-reported MAP consumption and (b) meet minimally restrictive eligibility criteria.

Our database contains 413 sources published before April 16, 2024 and is available at https://osf.io/jp498. We pre-registered our scoping review used to generate the database, including best-practice guidelines for searching and screening sources (Peacock et al., 2024). All deviations from our pre-registration are reported in this article: most significantly, we reduced the planned extent of forward and backward citation searching, and we did not complete coding of study characteristics. We would be excited to see further research build on our database and conduct deeper investigation with the identified studies.

This article explains how to use our database; its scope (inclusion and exclusion criteria) and how that scope differs from other reviews; and how we collected the data. We conclude by offering some suggestions for making use of the data.

How to use our database

The database and codebook can be accessed on the Open Science Framework at https://osf.io/dnu58/.

Each row in the database reports bibliographic data about one source, which we defined as one bibliographic unit (e.g., journal article, preprint, white paper, report). Many sources are open access; for sources that are not open access, we include a stable hyperlink. Pay-walled sources might be accessed via an academic library or by reaching out to the corresponding author.
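
As a brief illustration of working with the database (and nothing more), the sketch below loads and filters a local copy with pandas. It assumes you have downloaded a CSV export from the OSF project; the file name and column names are placeholders for illustration, not the actual codebook fields.

```python
# Minimal sketch: load a local CSV export of the database and filter it.
# The file name and column names below are illustrative placeholders,
# not the actual codebook fields.
import pandas as pd

sources = pd.read_csv("map_reduction_sources.csv")  # hypothetical file name
print(f"{len(sources)} sources loaded")  # one row per source

# Example: keep sources published since 2020 and list their titles and links.
recent = sources[sources["year"] >= 2020]
print(recent[["title", "url"]].head())
```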

The scope of our database and how it differs from other reviews

Previous reviews were often limited by restrictive inclusion criteria, such as focusing on a specific type of intervention or excluding non-experimental research designs. Therefore, we minimized restrictions on our inclusion and exclusion criteria. We used the population, intervention, comparator, outcome, time period (PICOT) framework (Riva et al., 2012) to design our inclusion and exclusion criteria:

Table 1: Eligibility criteria

Population
  Inclusion: Any human population.
  Exclusion: Human populations specifically recruited for having a particular medical diagnosis or a predisposition for a particular medical diagnosis.

Intervention
  Inclusion: Any intervention to which participants can be practically assigned and which is intended to reduce MAP consumption.
    • “Practically assigned” excludes studies of the effects of relatively immutable demographic or psychological traits; for example, a study of the effects of openness to new experience on meat consumption would be ineligible, unless accompanied by an intervention to manipulate openness.
    • Intention will be established primarily by an expression of authorial intent in the source text and, failing that, by reasonable inference on the part of the reviewers.
    • MAPs will include animal products for human consumption derived from vertebrates (like cows, pigs, chickens, and bony fish), crustaceans (like shrimp or crabs), or mollusks (like octopuses or clams).
  Exclusion: Interventions consisting only of changes in the price of a food product (e.g., own-price and cross-price elasticities) or participant income.

Comparison
  Inclusion:
    • A control group, control condition, or a baseline time period in which participants were not exposed to any intervention intended to reduce or eliminate MAP consumption; or
    • Another eligible intervention.
  Exclusion:
    • Studies that recruit from a population intending to reduce their MAP usage and employ either a baseline time period or another within-subjects measurement as the comparison. (For example, suppose a study of a vegan pledge program recruited participants interested in pledging to go vegan. If the study measured diet change using surveys before and after pledging, the study would be ineligible.)
    • Comparisons for which the mechanism of assignment to comparator groups is unknown. For example, a randomized or quasi-randomized study would be eligible, since the mechanism of assignment to comparators is identified as randomization. A pre-post study of a dining hall’s implementation of Meatless Monday would also be eligible, since the date of implementation would serve as the assignment mechanism. However, a prospective cohort study that measures meat consumption during each of three waves and its association with participants’ exposure to documentaries about factory farming would be ineligible if the mechanism by which participants were exposed to documentaries is not detailed by the authors.

Outcome
  Inclusion: Actual or self-reported consumption, selection, or purchase (collectively, usage) of a MAP; or hypothetical or simulated selection or purchase of these items. For example, purchases might be simulated in an online mock grocery store.
  Exclusion:
    • Beliefs or attitudes about the consumption of animal products.
    • Usage of plant-based foods, except where these outcomes are directly inverse to consumption of animal-based foods. For example, suppose a study reports outcomes as the percentage of meals sold that are plant-based, p. This study would be eligible if the percentage of non-plant-based meals could be calculated as 1 – p.
    • Intentions or intended future behavior.

Time frame
  Inclusion: Studies published or otherwise available anytime prior to our final search, April 16, 2024.
  Exclusion: None.

Language
  Inclusion: Any language.
  Exclusion: Sources for which we lack sufficient resources for translation into English.

When we started collecting this data, we had identified 23 relevant existing systematic reviews or meta-analyses.[1] Since we collected our data, more reviews in this research area have been published (e.g., Di Gennaro et al., 2024; Sleegers et al., 2025; and Green et al., 2024). Sleegers et al. (2025) and Green et al. (2024) both drew on the current database in preparing their reviews.[2] Because the three reviews are similar, we provide an overview of the most important differences between them in Table 2.

Table 2: Summary of eligibility criteria comparing this review, Green et al. (2024), and Sleegers et al. (2025). This table simplifies each review’s inclusion criteria; Table 1 is the authoritative source for this review’s inclusion criteria.

Population
  This dataset, Green et al. (2024), and Sleegers et al. (2025): any population.

Intervention
  This dataset, Green et al. (2024), and Sleegers et al. (2025): any intervention.

Comparator
  This dataset: randomized and non-randomized allocation; comparison against another putatively effective intervention, or a control condition not intended to change MAP usage.
  Green et al. (2024), more restrictive: only randomized allocation, with at least 25 subjects in treatment and control (or at least 10 clusters for studies that were cluster-assigned); comparison against a control condition not intended to change MAP usage.
  Sleegers et al. (2025), more restrictive: only randomized allocation; comparison against a control condition not intended to change MAP usage.

Outcome
  This dataset: actual, self-reported, or hypothetical MAP consumption; any interval between intervention and measurement.
  Green et al. (2024), more restrictive: actual or self-reported MAP consumption; at least one day between intervention and measurement.
  Sleegers et al. (2025), less restrictive: actual, self-reported, hypothetical, or intended MAP consumption, and attitudes towards MAP consumption; any interval between intervention and measurement.

Time period
  This dataset, Green et al. (2024), and Sleegers et al. (2025): any time period.

We minimized restrictions on eligibility criteria

We included sources investigating any population, and studying any intervention to which participants could be practically assigned and which is intended to reduce MAP consumption.[3] This is significantly broader than many extant reviews, although Sleegers et al. (2025) covers a broader set of outcomes, with more restrictions on comparators.

Previous reviews sometimes focused only on adult populations (e.g., Hartmann and Siegrist, 2017) recruited from Western countries (Taufik et al., 2019) or countries that are members of the Organisation for Economic Co-operation and Development (OECD; Veul, 2018). Some reviews focused only on a specific type of intervention, such as literacy interventions (Di Gennaro et al., 2024), dynamic norms (Weikertová and Urban, 2022), defaults (Meier et al., 2022), or leaflets (Greig, 2017); or on interventions targeting a specific motivation, such as animal welfare (Mathur et al., 2021) or environmentalism (Sanchez-Sabate and Sabaté, 2019). Further, reviewers sometimes limited themselves to a particular context such as universities (Chang et al., 2023) or to a particular type of MAP like red and processed meats (Reynolds et al., 2022).

Restricting eligible populations and interventions could, for example, elicit useful insights for environmental advocates looking to reduce high-carbon meat consumption among Westerners. But researchers and advocates will have a broad range of motivations and will benefit from a broad perspective about any interventions designed to reduce any MAP (including eggs, poultry, fish, and shrimp), through any means, in any context.

We were also more permissive with study designs than other reviews, such as Sleegers et al. (2025) and Green et al. (2024), which only permitted randomized controlled trials (RCTs). Although RCTs provide favourable conditions for testing causality, excluding other research designs means excluding some natural experiments that may have greater external validity than RCTs conducted in a laboratory setting. Furthermore, randomized studies currently tend to test educational or informational interventions, while choice architecture interventions have largely not yet been tested in randomized experiments. Thus, reviews excluding non-randomized studies will tend to retrieve a less diverse sampling of interventions. For example, Garnett (2021) tests modifying choice architecture and the availability of non-MAPs in university cafeterias, using a before-after design to provide comparators; this source would have been excluded had we limited our inclusion criteria to RCTs, but we believe it and sources like it may provide relevant information.

Previous research has suggested that a large proportion of the MAP reduction literature is gray literature (Mathur et al., 2021), published by nonprofits and nongovernmental organisations, but some extant reviews included only peer-reviewed sources (e.g., Reynolds et al. 2022; Kwasny et al. 2022; Hartmann and Siegrist, 2017). We included both peer-reviewed and non-peer-reviewed sources, as did Sleegers et al. (2025) and Green et al. (2024). Finally, we included sources in any language that we could reasonably translate and published at any point in time. This is broader than some reviews, which limited themselves to English language sources (e.g., Green et al., 2024) and/or sources published after a given date (e.g., Taufik et al., 2019 and Veul, 2018).

We included only sources that report on actual, hypothetical, or self-reported MAP consumption

Only sources reporting on changes in MAP consumption were eligible for our dataset. This makes for a narrower ‘outcome scope’ than reviews which assess MAP reduction among a larger set of behavioral changes (e.g., pro-environmental behaviors, as in Byerly et al. 2018 and Nisa, 2018; or other healthy dietary behaviors, as in Taufik et al. 2019) and reviews which also include sources reporting on related psychological constructs (e.g., intention to reduce MAP consumption, as in Weikertová and Urban, 2022; Kwasny et al. 2022; Mathur et al. 2021; Harguess et al. 2020; Bianchi et al. 2018a; Bianchi et al. 2018b; and Hartmann and Siegrist, 2017). These other behavioral and psychological outcomes will be relevant to some research and advocacy directions, but they are imperfect proxies for actual consumption and so not directly relevant to our research question. Sleegers et al. (2025) has a broader outcome scope than our dataset, as it includes sources reporting on intentions and attitudes, but Green et al. (2024) is similarly only concerned with MAP consumption.

Decreases in MAP consumption are not the same as increases in plant-based food consumption. Previous reviews suggest that most research on interventions to increase plant-based consumption focuses on increasing fruit and vegetable consumption (Kwasny et al. 2022; Taufik et al. 2019). We excluded sources that reported on changes in plant-based consumption, except where they also reported on changes in MAP consumption or participants were making discrete choices between MAPs and plant-based alternatives, such that we could infer their MAP consumption.

Green et al. (2024) has the closest scope of outcome eligibility to our dataset, but we made different decisions about which outcomes to include as measures of MAP consumption (Table 3). Some measurements of food and drink consumption are more accurate than others (Peacock, 2018), so setting outcome eligibility criteria means balancing the breadth of included studies against the quality of outcome measures; we aimed to minimize restrictions while preserving a baseline of quality. Another dimension of outcome measurement is the time elapsed between treatment and measurement. Green et al. (2024) restricts inclusion to sources with at least one day between the intervention and the outcome, noting that immediate outcome measures could yield overly optimistic estimates of long-run effects. For example, some interventions may cause participants to decrease their MAP consumption for the meal at hand but compensate with increased MAP consumption later. On the other hand, excluding sources that only report short-term outcome measures could mean excluding many or all choice architecture interventions. We therefore included sources regardless of the time between treatment and outcome measurement, to avoid excluding a large and potentially promising class of interventions.

Table 3: Comparison between eligible outcomes for this dataset and Green et al. (2024)

Objectively measured MAP consumption (e.g., point-of-sale data): included in this dataset; included by Green et al. (2024).
Hypothetical elicitation of MAP consumption (e.g., a discrete hypothetical choice experiment): included in this dataset; excluded by Green et al. (2024).
Self-reported MAP consumption (e.g., food diary): included in this dataset; included by Green et al. (2024).
Minimum time between treatment and measurement: no minimum in this dataset; one day for Green et al. (2024).

How we collected our data

Initial search strategy

An initial set of 655 sources was identified from 23 existing meta-analyses and systematic reviews published between 2017 and 2022. We aimed to include all meta-analyses and reviews of interventions designed to reduce MAP consumption, excluding reviews of interventions that failed to meet our eligibility criteria.

We then retrieved gray literature via a non-exhaustive search of 14 nonprofit websites,[4] identifying 109 sources, and searched Rethink Priorities’ ad hoc database. This database contained 857 sources at the time of our pre-registration and had been generated incidentally through the daily work of researchers at Rethink Priorities and The Humane League Labs over the course of more than five years.

Initial screening

Two independent reviewers screened the title and abstract of each candidate source against our eligibility criteria, erring on the side of including sources when information was missing or ambiguous. When a source passed title and abstract screening, a reviewer screened the full text against our eligibility criteria. If that reviewer determined the source was ineligible, a second reviewer was assigned to conduct a secondary full-text screening.
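
To make the decision flow concrete, here is a schematic sketch of the two rules described above, in our own rendering rather than the pre-registered protocol text: ambiguity at the title-and-abstract stage errs toward inclusion, and a full-text exclusion stands only after a second reviewer agrees. The function names and vote labels are invented for illustration.

```python
# Schematic sketch of the screening decision flow described above; the vote
# labels and function names are illustrative, not part of the pre-registered
# protocol (Peacock et al., 2024).

def title_abstract_decision(vote_a: str, vote_b: str) -> bool:
    """Two independent reviewers vote "include", "exclude", or "unsure".
    Ambiguity errs toward inclusion: the source advances unless both exclude."""
    return not (vote_a == "exclude" and vote_b == "exclude")

def full_text_decision(first_vote: str, second_vote: str = "") -> bool:
    """One reviewer screens the full text; an exclusion triggers a secondary
    full-text screening, and the source is dropped only if both exclude."""
    if first_vote == "include":
        return True
    return second_vote == "include"

# The source advances past title/abstract despite one "exclude" vote, and is
# retained at full text because the second reviewer overrides the exclusion.
assert title_abstract_decision("include", "exclude")
assert full_text_decision("exclude", "include")
```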

When a source was not in English, we used Google Translate for titles and abstracts, and committed to full-text screening whenever we had the resources to translate. Ultimately, we left two full-text sources untranslated due to resource constraints.

Iterative searching and screening

After 154 initial sources passed our screening protocol, we conducted forward and backward citation searching. That is, for a given eligible source, we identified every paper that it cited (backward citations), and up to 100 papers that cited it (forward citations). These searches occurred on April 16, 2024, returning a further 7,411 candidate sources, which were screened following the same procedure as the initial sources. As expected, a smaller proportion of sources from the forward and backward search passed screening than from the initial search. In summary, we title and abstract screened approximately 7,000 sources (14,000 screenings) and full-text screened approximately 1,200 sources.
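
As an illustration only (this is not necessarily the tooling we used), forward and backward citations for an eligible source can be retrieved from a scholarly index such as OpenAlex, as sketched below; the work ID is a placeholder, and the request simply mirrors the 100-citation cap described above.

```python
# Illustration only (not necessarily the tooling we used): retrieving forward
# and backward citations for one source from the OpenAlex API.
import requests

BASE = "https://api.openalex.org"
work_id = "W2741809807"  # placeholder OpenAlex ID for an eligible source

# Backward citations: every work that this source cites.
work = requests.get(f"{BASE}/works/{work_id}", timeout=30).json()
backward = work["referenced_works"]

# Forward citations: up to 100 works citing this source, mirroring our cap.
citing = requests.get(
    f"{BASE}/works",
    params={"filter": f"cites:{work_id}", "per-page": 100},
    timeout=30,
).json()
forward = [w["id"] for w in citing["results"]]

print(f"{len(backward)} backward and {len(forward)} forward candidates")
```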

Deviations from our pre-registration

We made two significant deviations from our pre-registered protocol. First, to maximize our coverage of the eligible sources, we had originally intended to snowball sample our forward and backward searching, running the process iteratively until we were no longer returning new eligible sources. In practice, we stopped after one round of forward and backward searching due to resource limitations. Second, we had originally intended to code characteristics of studies and comparators, which we had defined as the experiments within a source and the arms within a study, respectively. In practice, we ceased work on this project before coding studies and comparators.

We also deviated from the pre-registration (Peacock et al. 2024) in several minor ways:

  • We had originally committed to using review software such as ASReview or DistillerSR, but ultimately managed screening in the database software Grist (https://www.getgrist.com/).
  • We encountered a number of clearly irrelevant books after our forward and backward citation searching (e.g., Nudge, Thaler and Sunstein, 2008; Fellow Creatures, Korsgaard, 2018). We adapted our protocol to permit exclusion if the title and the reviewers’ contextual knowledge suggested the source was clearly unlikely to report on an eligible study.
  • We encountered cases where one reviewer had passed non-English sources through title and abstract screening without translating them; once translated, it was clear that these sources would fail title and abstract screening. Our protocol committed us to full-text screening these sources, but we instead had a third title-and-abstract screener translate and screen them.
  • We replaced the term “Edible Animal Product Usage (EAPU)” with “MAP consumption,” although this was not substantive.

Results

Our database contains 413 sources (see Figure 1). After deduplication, we had identified 921 initial candidate sources. We identified a further 7,411 candidate sources from forward and backward searching, from which we excluded 1,360 duplicates with our initial seed set. In sum, we identified 8,630 unique candidate sources. We title and abstract screened 6,869 sources; 1,247 were eligible for full-text screening. Of those, we retrieved 1,211 sources and failed to retrieve or completely screen 36 sources. We did not complete coding for the 413 eligible sources and, as such, are not in a position to comment on findings in detail.

Figure 1: PRISMA flow diagram for including sources to review

What you can do with this database

Due to the large number of eligible sources, we did not complete the originally planned coding of studies. As a result, many future directions using this database remain open, including meta-analyses and more targeted literature reviews. We welcome other researchers to extend and enrich this database in any way they see fit. Before undertaking work, it may be worthwhile to contact the authors of Sleegers et al. (2025) and Green et al. (2024) to avoid duplicating effort.

Completing the coding

We defined sources as publications, studies as experiments reported in sources, and comparators as arms within studies. Comparators might be “before” and “after” conditions in a within-subjects design, or “treatment” and “control” conditions in a between-subjects design.[5] Although we had set out to code each source, study, and comparator, we stopped work before completing this task for most studies. Interested researchers could continue and complete the coding.

To illustrate the relationship between source, studies, and comparators, take Piernas et al. (2021) as an example. The authors were investigating an intervention to affect meat sales and/or meat-free sales in supermarkets.

  • They observed weekly meat sales at “108 stores: 20 intervention stores that moved a selection of 26 meat-free products into a newly created meat-free bay within the meat aisle and 88 matched control stores.” This comprises one study with two comparators: intervention stores and control stores.
  • Subsequently, 12 intervention stores maintained the same meat-free bay, while eight intensified the intervention by bringing in more meat-free products into the bay. This comprises a second study with two comparators.

Table 4: Example to illustrate database schema.

Source: Estimating the effect of moving meat-free products to the meat aisle on sales of meat and meat-free products: A non-randomised controlled intervention study in a large UK supermarket chain (Piernas et al., 2021)
  Study: Phase I
    Comparator: 20 intervention stores, with meat-free bays
    Comparator: 88 control stores, with no meat-free bays
  Study: Phase II
    Comparator: 8 intervention stores, with ‘intensified’ meat-free bays
    Comparator: 12 control stores, maintaining original meat-free bays

Where sources contained at least one relevant study and at least one irrelevant study (e.g., Venema et al., 2020), we proposed coding only relevant studies. In our database schema, sources and studies would have a many-to-many relationship, because some sources reported on multiple studies and, in at least one case, a study was reported in multiple sources. Studies and comparators had a one-to-many relationship (because, definitionally, each study must contain at least two comparators).
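
To make these relationships concrete, the sketch below expresses the intended structure as a small relational schema, using SQLite purely for illustration (the database itself is maintained in Grist). Table and column names are placeholders rather than the codebook's field names, and the example rows mirror Table 4.

```python
# Illustrative relational sketch of the source/study/comparator schema; the
# actual database is maintained in Grist, and these names are placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sources (source_id INTEGER PRIMARY KEY, citation TEXT);
CREATE TABLE studies (study_id INTEGER PRIMARY KEY, description TEXT);

-- Sources and studies are many-to-many: a source can report several studies,
-- and at least one study was reported in more than one source.
CREATE TABLE source_studies (
    source_id INTEGER REFERENCES sources(source_id),
    study_id  INTEGER REFERENCES studies(study_id),
    PRIMARY KEY (source_id, study_id)
);

-- Studies and comparators are one-to-many: each study has at least two arms.
CREATE TABLE comparators (
    comparator_id INTEGER PRIMARY KEY,
    study_id      INTEGER REFERENCES studies(study_id),
    label         TEXT
);
""")

# Example rows mirroring Table 4 (Piernas et al., 2021).
conn.execute("INSERT INTO sources VALUES (1, 'Piernas et al., 2021')")
conn.executemany("INSERT INTO studies VALUES (?, ?)",
                 [(1, "Phase I"), (2, "Phase II")])
conn.executemany("INSERT INTO source_studies VALUES (1, ?)", [(1,), (2,)])
conn.executemany("INSERT INTO comparators VALUES (?, ?, ?)", [
    (1, 1, "20 intervention stores, meat-free bays"),
    (2, 1, "88 control stores, no meat-free bays"),
    (3, 2, "8 intervention stores, intensified meat-free bays"),
    (4, 2, "12 control stores, maintaining original meat-free bays"),
])
```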

There is a wide range of data to extract, and interested researchers can consult our codebook to see everything we intended to code. Broadly, we had been interested in coding:

  • For studies, information about the research design, especially information relevant to judging the risk of bias. This would have included whether the study used a between-subjects and/or within-subjects design, how the study randomized or otherwise assigned participants to comparators, any information about “missing data” (e.g., failure to complete a post-intervention survey), any information about social desirability bias, and any information about compensatory behaviour (e.g., eating more meat at home after participating in meat-free Mondays at work).
  • For studies, information about eligible outcome measurements (including a detailed prose description of them); a categorical variable indicating whether those measures are direct (i.e., objective measurement), hypothetical, or self-reported; a categorical variable indicating whether the affected MAPs are General (e.g., “meat”), Large (e.g., beef, pork), Small (e.g., chicken, fish), and/or Specific (i.e., explicitly identified animal species)[6]; and any information about the lag between treatment and measurement.
  • For comparators, information about the intervention design, including a categorical variable indicating whether the intervention modifies choice architecture (i.e., adjusting the physical environment in which participants make decisions), provides information (i.e., increasing participants’ knowledge or skills to motivate them to reduce MAP consumption), or falls into an “other” category. We also proposed a subcategory schema (e.g., classifying each choice architecture comparator as availability, labeling, menu-based, default, serving size, or analogue).
  • For comparators, the result of the outcome measurement.
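
To illustrate what a completed coding record might have looked like, here is a hedged sketch using Python dataclasses. The field names paraphrase the categories listed above rather than the codebook's actual fields, and the example values are hypothetical.

```python
# Hedged sketch of a coded study and comparator; field names paraphrase the
# categories above (they are not the codebook's actual fields) and the
# example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CodedStudy:
    design: str                      # "between-subjects", "within-subjects", or both
    assignment_mechanism: str        # e.g., "randomization", "implementation date"
    missing_data_notes: str
    social_desirability_notes: str
    compensatory_behavior_notes: str
    outcome_description: str
    outcome_measure: str             # "direct", "hypothetical", or "self-reported"
    map_scope: list[str] = field(default_factory=list)  # "General", "Large", "Small", "Specific"
    measurement_lag: str = "unreported"

@dataclass
class CodedComparator:
    intervention_category: str       # "choice architecture", "information provision", or "other"
    intervention_subcategory: str    # e.g., "availability", "labeling", "default"
    result: str                      # the reported outcome measurement

example_study = CodedStudy(
    design="between-subjects",
    assignment_mechanism="non-randomized (implementation date)",
    missing_data_notes="none reported",
    social_desirability_notes="objective sales data",
    compensatory_behavior_notes="not assessed",
    outcome_description="weekly meat sales (point-of-sale data)",
    outcome_measure="direct",
    map_scope=["General"],
    measurement_lag="measured during the intervention period",
)
example_comparator = CodedComparator(
    intervention_category="choice architecture",
    intervention_subcategory="availability",
    result="not yet coded",
)
```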

Conducting literature reviews

The whole database, or select subsets of it, could be used for further literature reviews and meta-analyses, once the relevant information about studies and comparators has been coded. We had proposed running focused literature reviews on classroom interventions, on increasing the availability of plant-based options, and on menu design interventions. However, we believe other insightful literature reviews could be drawn from this database, and researchers or advocates may wish to pursue reviews that we had not thought to prioritise.

Acknowledgments

This post is a project of Rethink Priorities—a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas.

We would like to thank the many volunteers who helped screen sources for eligibility (listed alphabetically by surname): Hasan Alparslan Bayrak, Elena Bräu, AnnaLise Hoopes, Steven Mai, Emily MacNintch, Dung Nguyen, John Nyabwari Nyambane, Tapinder Sadu, Elena Schaller, Timea Tarczy, and Sofia Estefania Vera Verduzco.

Thanks to William McAuliffe for managing this summary of the project, Shane Coburn for copyediting, and Urszula Zarosa for assistance with publishing the report online and dissemination.

If you are interested in Rethink Priorities’ work, please visit our research database and subscribe to our newsletter.

Conflicts of interest

Ben Stevenson, Maya B. Mathur, Jessica E. Hope, Julia Fabienne Sandkühler, and Constanza Arévalo declare no conflicts of interest. Jacob R. Peacock is a trustee at Food Systems Research Fund (fsrfund.org), advisor at New Roots Institute (newrootsinstitute.org), and board member at Animal Charity Evaluators (animalcharityevaluators.org). Joanna E. Anderson is a board member at Secondhand Stories (secondhandstories.ca). These organizations were in no way involved in this paper.

Bibliography

Bianchi, F., Dorsel, C., Garnett, E., Aveyard, P., & Jebb, S. A. (2018). Interventions targeting conscious determinants of human behaviour to reduce the demand for meat: A systematic review with qualitative comparative analysis. International Journal of Behavioral Nutrition and Physical Activity, 15(1), 102. https://doi.org/10.1186/s12966-018-0729-6

Bianchi, F., Garnett, E., Dorsel, C., Aveyard, P., & Jebb, S. A. (2018). Restructuring physical micro-environments to reduce the demand for meat: a systematic review and qualitative comparative analysis. The Lancet Planetary Health, 2(9), e384–e397. https://doi.org/10.1016/S2542-5196(18)30188-8

Byerly, H., Balmford, A., Ferraro, P. J., Wagner, C. H., Palchak, E., Polansky, S., Ricketts, T. H., Schwartz, A. J., & Fisher, B. (2018). Nudging pro-environmental behavior: evidence and opportunities. Frontiers in Ecology and the Environment. https://doi.org/10.1002/fee.1777

Chang, K. B., Wooden, A., Rosman, L., Altema-Johnson, D., & Ramsing, R. (2023). Strategies for reducing meat consumption within college and university settings: A systematic review and meta-analysis. Frontiers in Sustainable Food Systems, 7. https://doi.org/10.3389/fsufs.2023.1103060

Garnett, E. (2021). The steaks are high: Reducing meat consumption by changing physical and economic environments to increase vegetarian sales [Apollo – University of Cambridge Repository]. https://doi.org/10.17863/CAM.63644

Di Gennaro, G., Licata, F., Pujia, A., Montalcini, T., & Bianco, A. (2024). How may we effectively motivate people to reduce the consumption of meat? Results of a meta-analysis of randomized clinical trials. Preventive Medicine, 184, 108007. https://doi.org/10.1016/j.ypmed.2024.108007

Green, S. A., Smith, A., & Mathur, M. B. (2025). Meaningfully reducing consumption of meat and animal products is an unsolved problem: A meta-analysis. https://doi.org/10.31219/osf.io/q6xyr_v4

Greig, K. (2017). Leafleting Intervention Report. Animal Charity Evaluators. https://animalcharityevaluators.org/wp-content/uploads/2022/04/leafleting-intervention-report.pdf

Grundy, E. A. C., Slattery, P., Saeri, A. K., Watkins, K., Houlden, T., Farr, N., Askin, H., Lee, J., Mintoft-Jones, A., Cyna, S., Dziegielewski, A., Gelber, R., Rowe, A., Mathur, M. B., Timmons, S., Zhao, K., Wilks, M., Peacock, J. R., Harris, J., … Zorker, M. (2022). Interventions that influence animal-product consumption: A meta-review. Future Foods, 5, Article 100111. https://doi.org/10.1016/j.fufo.2021.100111

Harguess, J. M., Crespo, N. C., & Yong Hong, M. (2020). Strategies to reduce meat consumption: A systematic literature review of experimental studies. Appetite, 144, Article 104478. https://doi.org/10.1016/j.appet.2019.104478

Hartmann, C., & Siegrist, M. (2017). Consumer perception and behaviour regarding sustainable protein consumption: A systematic review. Trends in Food Science & Technology, 61. https://doi.org/10.1016/j.tifs.2016.12.006

Korsgaard, C. (2018). Fellow creatures: Our obligations to the other animals. Oxford University Press.

Kwasny, T., Dobernig, K., & Riefler, P. (2022). Towards reduced meat consumption: A systematic literature review of intervention effectiveness, 2001-2019. Appetite, 168, 105739. https://doi.org/10.1016/j.appet.2021.105739

Sleegers, W., Jaeger, B., & van Aert, R. (2025). Library of Interventions for Meat Elimination. https://meat-lime.vercel.app/

Mathur, M. B., Peacock, J., Reichling, D. B., Nadler, J., Bain, P. A., Gardner, C. D., & Robinson, T. N. (2021). Interventions to reduce meat consumption by appealing to animal welfare: Meta-analysis and evidence-based recommendations. Appetite, 164, 105277. https://doi.org/10.1016/j.appet.2021.105277

Mathur, M. B. (2022, March 31). Ethical drawbacks of sustainable meat choices. https://osf.io/nkyqd

Meier, J., Andor, M. A., Doebbe, F. C., Haddaway, N. R., & Reisch, L. A. (2022) Review: Do green defaults reduce meat consumption? Food Policy, 110, Article 102298. https://doi.org/10.1016/j.foodpol.2022.102298

Nisa, C. (2018). Low impact of interventions to promote action on climate change: Meta-analysis with 3M observations. Nature Communications, forthcoming. https://dx.doi.org/10.2139/ssrn.3254938

Peacock, J., Mathur, M., Hope, J. E., Stevenson, B., Arévalo, C., Mendez, S., Anderson, J. (2024). Scoping Review of Interventions to Reduce Edible Animal Product Usage. https://osf.io/dnu58/

Peacock, J. (2018). Measuring Change in Diet for Animal Advocacy. The Humane League Labs. https://doi.org/10.17605/OSF.IO/8ZQC3

Piernas, C., Cook, B., Stevens, R., Stewart, C., Hollowell, J., Scarborough, P., & Jebb, S. A. (2021). Estimating the effect of moving meat-free products to the meat aisle on sales of meat and meat-free products: A non-randomised controlled intervention study in a large UK supermarket chain. PLOS Medicine. https://doi.org/10.1371/journal.pmed.1003715

Reynolds, A. N., Mhurchu, C. N., Kok, Z-Y., & Cleghorn, C. (2023). The neglected potential of red and processed meat replacement with alternative protein sources: simulation modelling and systematic review. eClinicalMedicine, 56, Article 101774. https://doi.org/10.1016/j.eclinm.2022.101774

Riva, J. J., Malik, K. M., Burnie, S. J., Endicott, A. R., & Busse, J. W. (2012). What is your research question? An introduction to the PICOT format for clinicians. The Journal of the Canadian Chiropractic Association, 56(3), 167–171.

Sanchez-Sabate, R., & Sabaté, J. (2019). Consumer Attitudes Towards Environmental Concerns of Meat Consumption: A Systematic Review. International Journal of Environmental Research and Public Health, 16(7), 1220. https://doi.org/10.3390/ijerph16071220

Taufik, D., Verain, M. C. D., Bouwman, E. P., & Reinders, M. J. (2019). Determinants of real-life behavioural interventions to stimulate more plant-based and less animal-based diets: A systematic review. Trends in Food Science & Technology, 93. https://doi.org/10.1016/j.tifs.2019.09.019

Thaler, R., & Sunstein, C. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

Veul, J. (2018). Interventions to reduce meat consumption in OECD countries: An understanding of differences in success (Master’s thesis). Radboud University Nijmegen. https://theses.ubn.ru.nl/handle/123456789/6391

Venema, T. A. G., Kroese, F. M., Benjamins, J. S., & de Ridder, D. T. D. (2020). When in Doubt, Follow the Crowd? Responsiveness to Social Proof Nudges in the Absence of Clear Preferences. Frontiers in Psychology, 11, 1385. https://doi.org/10.3389/fpsyg.2020.01385

Weikertová, S., & Urban, J. (2023, February 5). Dynamic Norms Do Not Promote Vegetarian Food Preferences: Two Experiments and a Meta-Analysis. https://doi.org/10.31234/osf.io/qfn6y

  1. https://airtable.com/appZIacA2g13FirQo/shr1XEmaDCzZIXn65/tblua5yM9VjuEiyUU/viwDwgBzvhfW6P0Gp
  2. Willem Sleegers is our former colleague at Rethink Priorities. Seth Ariel Green is a colleague of Jacob R. Peacock at Stanford University’s Humane and Sustainable Food Lab.
  3. Note that these criteria exclude (a) interventions designed to increase MAP consumption, and (b) interventions that changed the relative price of MAPs and non-MAPs.
  4. Animal Charity Evaluators (animalcharityevaluators.org/research/), Animal Equality (animalequality.org/), Better Buying Lab (wri.org/initiatives/better-buying-lab), Compassion in World Farming (ciwf.com/research/), Faunalytics (faunalytics.org/completed-projects/), Good Food Institute (gfi.org/), Johns Hopkins Center for a Livable Future (clf.jhsph.edu/), Meatless Monday (mondaycampaigns.org/meatless-monday/research), Mercy for Animals (mercyforanimals.org/), Rethink Priorities (rethinkpriorities.org/research), Sentience Institute (sentienceinstitute.org/research), The Humane League Labs (thehumaneleague.org/research-reports), Wellbeing International Animal Studies Repository (wellbeingintlstudiesrepository.org/), and World Resources Institute (wri.org/research).
  5. Where studies reported on pre- and post-intervention conditions for both a treatment and control group, we only intended to record two comparators.
  6. One of the biggest challenges in translating context-specific studies into robust asks for meat reduction campaigns is the small-bodied animal problem (Mathur, 2022). Coding which animal species are affected by a given intervention is an important prerequisite for tackling this issue. Faunalytics, with the support of Bryant Research, is preparing a forthcoming meta-analysis of evidence on the small-bodied animal problem.