🔬 For Researchers & Academics

Evidence Synthesis & Knowledge Production Framework

How researchers can generate, validate, and translate knowledge that accelerates action on child nutrition across disciplines, geographies, and implementation contexts

The Research-Action Gap

Academic research produces thousands of nutrition studies annually. Most sit in paywalled journals, written in technical language, disconnected from implementation contexts. Meanwhile, practitioners make urgent decisions with limited evidence, and policymakers cite outdated or cherry-picked studies.

The result: A knowledge production system optimized for academic advancement rather than real-world impact. Rigorous research that never influences action. Interventions scaled without evidence. Resources misallocated due to information asymmetries.

"The question is not 'Is this publishable?' but rather 'Will this knowledge change how resources flow to children?'"

This framework repositions research as a public good for coordination rather than a credential for individual advancement. It provides guidance for researchers who want their work to matter beyond citation counts.

Evidence Quality Framework

Not all evidence is created equal. This hierarchy guides both evidence generation and evidence interpretation. Higher levels provide stronger basis for action, but all levels have value when properly contextualized.

Level 1: Systematic Reviews & Meta-Analyses

Highest Quality

What: Comprehensive synthesis of all available evidence on a specific question, with quantitative pooling (meta-analysis) when appropriate.
Strength: Reduces bias from single studies, increases statistical power, identifies consistency across contexts.
Limitation: Quality depends on underlying studies; may mask important heterogeneity; retrospective.
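The quantitative pooling step of a meta-analysis can be sketched in a few lines. This is a minimal inverse-variance fixed-effect pooling example; the effect sizes and standard errors are hypothetical, not from any real study.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling, the core
# calculation in a meta-analysis. All numbers below are illustrative.
from math import sqrt

def pool_fixed_effect(effects, std_errors):
    """Pool study effect sizes, weighting each by its inverse variance."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials of a feeding intervention (effect = change in
# height-for-age z-score). More precise studies receive larger weights.
effects = [0.15, 0.22, 0.08]
std_errors = [0.05, 0.10, 0.04]
est, se = pool_fixed_effect(effects, std_errors)
print(f"pooled effect = {est:.3f}, 95% CI ± {1.96 * se:.3f}")
```

Note how the pooled estimate sits closest to the most precise study — this weighting is why meta-analysis can mask heterogeneity when contexts truly differ.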

Level 2: Randomized Controlled Trials (RCTs)

High Quality

What: Experimental studies with random assignment to intervention and control groups, measuring causal effects.
Strength: Strongest causal inference; controls for confounding; replicable methodology.
Limitation: Expensive; may have limited external validity; ethical constraints in some contexts; artificial conditions.

Level 3: Quasi-Experimental Studies

Moderate Quality

What: Non-randomized studies using comparison groups, natural experiments, regression discontinuity, difference-in-differences, or propensity score matching.
Strength: More feasible than RCTs; can leverage existing variation; better external validity.
Limitation: Weaker causal claims; vulnerable to selection bias; requires careful design and analysis.
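Difference-in-differences, one of the designs named above, reduces to a simple comparison of changes. The group means below are hypothetical stunting rates, purely for illustration.

```python
# A minimal difference-in-differences sketch. The figures are
# hypothetical: mean stunting rates (%) before and after a program
# rollout, in covered (treated) vs. uncovered (control) districts.
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """DiD estimate = (change in treated group) - (change in control group)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Treated districts: 38% -> 31%. Control districts: 36% -> 34%.
# (-7) - (-2) = -5: a 5-point drop beyond the background trend.
effect = diff_in_diff(38.0, 31.0, 36.0, 34.0)
print(f"estimated program effect: {effect:+.1f} percentage points")
```

The design's key assumption — parallel trends absent the program — is exactly where the "weaker causal claims" limitation bites, so it should be checked against pre-period data.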

Level 4: Observational & Descriptive Studies

Foundational

What: Cross-sectional surveys, cohort studies, case-control studies, prevalence studies, or descriptive analysis without experimental manipulation.
Strength: Can study rare outcomes; real-world conditions; large sample sizes; less expensive.
Limitation: Cannot establish causation; confounding factors; limited ability to inform intervention design.

Level 5: Expert Opinion & Case Studies

Contextual

What: Professional consensus, clinical experience, qualitative research, implementation case studies, or theoretical frameworks.
Strength: Captures nuance; generates hypotheses; rapid feedback; contextual understanding.
Limitation: Subjective; not generalizable; vulnerable to bias; should not guide major resource allocation alone.

💡 Using the Hierarchy Appropriately

Lower-quality evidence still has value when:

  • Higher-quality evidence doesn't exist yet (new interventions, emerging contexts)
  • Ethical constraints prevent experimental designs
  • Urgent decisions require immediate action with available information
  • Qualitative insights reveal mechanisms not captured by quantitative studies

The key: Be transparent about evidence quality. Don't overstate certainty. Clearly distinguish correlation from causation.

High-Priority Research Gaps

These questions represent critical knowledge gaps where new research could significantly influence resource allocation, policy design, or implementation effectiveness. Prioritized by potential impact, feasibility, and current evidence quality.

Cost-Effectiveness Across Contexts

Question: How do intervention cost-effectiveness ratios vary across income levels, geographies, and implementation partners?
Why: Most CE analyses are context-specific. Funders need generalizable frameworks for allocation decisions.

High Urgency
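The cost-effectiveness ratios this question targets are simple arithmetic, but highly sensitive to input assumptions. A minimal sketch with one-way sensitivity analysis; all figures are hypothetical placeholders.

```python
# Sketch of a cost-effectiveness ratio with simple one-way sensitivity
# analysis. All figures are hypothetical, not real program data.
def cost_per_outcome(total_cost, children_reached, effect_per_child):
    """Cost per unit of outcome (e.g., per case of stunting averted)."""
    return total_cost / (children_reached * effect_per_child)

# Base case: $500k program, 10,000 children, 5% of cases averted.
base = cost_per_outcome(500_000, 10_000, 0.05)  # $1,000 per case averted

# Varying just the effect size shows how fragile a single ratio can be,
# which is why transparent assumptions matter for allocation decisions.
for effect in (0.03, 0.05, 0.08):
    ratio = cost_per_outcome(500_000, 10_000, effect)
    print(f"effect {effect:.2f} -> ${ratio:,.0f} per case averted")
```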

Supply Chain Optimization

Question: What distribution models minimize cost and spoilage for therapeutic foods in low-infrastructure settings?
Why: Logistics often account for 40-60% of program costs. Small improvements generate massive savings.

High Urgency

Behavior Change Mechanisms

Question: Which behavioral interventions durably change infant and young child feeding practices?
Why: Knowledge transfer doesn't guarantee practice change. Understanding mechanisms enables better program design.

Medium Urgency

Long-Term Outcomes

Question: What are the cognitive, economic, and health trajectories of children who received early nutrition interventions?
Why: Most studies measure short-term anthropometrics. Long-term impacts justify investment and inform program design.

Medium Urgency

Multi-Sectoral Synergies

Question: How do nutrition interventions interact with health, education, and WASH programs?
Why: Programs operate in silos. Understanding complementarities enables better coordination.

Ongoing

Emergency Response Effectiveness

Question: What pre-positioning strategies minimize mortality in acute nutrition crises?
Why: Crises are increasing. Evidence on rapid response could save thousands of lives annually.

High Urgency

How to Prioritize Your Research
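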

High-impact research typically has three characteristics:

  1. Decision relevance: Answers a question that practitioners, funders, or policymakers are actively facing
  2. Actionability: Results can be translated into specific operational changes
  3. Scalability: Findings apply beyond a single organization or geography

Test: If your research succeeds, who will change their behavior and how? If you can't answer this, consider refocusing.

Cross-Domain Research Opportunities

Child nutrition sits at the intersection of multiple disciplines. The highest-leverage research often connects insights across traditional academic boundaries. These are areas ripe for synthesis.

🧬 Nutrition Science × 💰 Economics
Opportunity: How do food price volatility, income shocks, and market structures affect micronutrient adequacy? Most nutrition studies ignore economic context; most economic studies use crude nutrition measures. Synthesis enables better policy design.
🏛️ Political Science × 🏥 Public Health
Opportunity: Why do some governments prioritize child nutrition while others don't, despite similar resources? Political economy analysis can explain policy gaps that technical analysis misses.
🌾 Agricultural Systems × 🧠 Behavioral Science
Opportunity: What drives farmer adoption of biofortified crops and consumer acceptance? Supply-side interventions fail without understanding demand-side psychology.
🌍 Climate Science × 🍽️ Food Security
Opportunity: How do climate shocks cascade through food systems to affect child nutrition? Predictive models could enable anticipatory action before crises.
📊 Data Science × 🚚 Supply Chain
Opportunity: Can real-time data and machine learning optimize nutrition supply chains? Most humanitarian logistics still rely on static planning models.
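As one illustration of what "beyond static planning models" could mean: a reorder point computed from recent observed demand rather than a fixed schedule. The weekly demand figures and the 95% service-level factor are hypothetical assumptions for the sketch.

```python
# Sketch of demand-driven stock planning: a reorder point computed from
# recent weekly demand instead of a static schedule. Figures are
# hypothetical (weekly RUTF carton demand at one clinic); z = 1.65
# approximates a 95% service level under a normality assumption.
from statistics import mean, stdev

def reorder_point(weekly_demand, lead_time_weeks, z=1.65):
    """Expected demand over the lead time plus safety stock."""
    mu, sigma = mean(weekly_demand), stdev(weekly_demand)
    return lead_time_weeks * mu + z * sigma * lead_time_weeks ** 0.5

demand = [120, 95, 140, 110, 130, 105]  # last six weeks of cartons
print(round(reorder_point(demand, lead_time_weeks=2)))
```

Even this toy version reorders earlier when demand is volatile — the safety-stock term grows with the standard deviation of recent demand.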

Methodology Selection Framework

Different research questions require different methods. This framework helps match methods to questions for maximum validity and impact.

Randomized Controlled Trials
Best for: Testing causal effects of discrete interventions with clear treatment and control groups
Strength: Internal validity, causal inference
Weakness: External validity, ethical constraints, cost
Natural Experiments
Best for: Leveraging policy changes, geographic boundaries, or timing variation to estimate effects
Strength: Real-world validity, feasibility
Weakness: Requires right setting, weaker causal claims
Implementation Science
Best for: Understanding how and why interventions succeed or fail in real-world settings
Strength: Actionable insights, contextual understanding
Weakness: Limited generalizability, requires process data
Systems Modeling
Best for: Exploring complex dynamics, feedback loops, and long-term trajectories in nutrition systems
Strength: Handles complexity, scenario analysis
Weakness: Model validity depends on assumptions, data quality
Mixed Methods
Best for: Combining quantitative impact measurement with qualitative mechanism exploration
Strength: Depth and breadth, triangulation
Weakness: Resource intensive, requires multiple skill sets
Replication Studies
Best for: Testing whether findings hold across contexts, populations, or implementation partners
Strength: Builds confidence, identifies boundary conditions
Weakness: Less novel, harder to publish, requires funding

Open Science & Replication Protocol

Science advances through replication, not individual studies. This protocol makes your research maximally useful for future researchers, practitioners, and meta-analysts.

1. Pre-Registration

Register your study design, hypotheses, and analysis plan before data collection.

Platforms: ClinicalTrials.gov (clinical studies), OSF Registries (social science), AsPredicted (economics). Include: research question, outcome measures, sample size calculation, statistical approach, and any exploratory analyses.

Why: Prevents p-hacking, selective reporting, and publication bias.
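The sample size calculation a pre-registration should include is often the standard two-arm normal-approximation formula. A minimal sketch; the effect size and standard deviation below are hypothetical.

```python
# Sketch of the two-arm sample-size calculation a pre-registration
# typically includes: n per arm to detect a difference `delta` with a
# given power at two-sided significance level `alpha`.
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-arm trial."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z(power)            # critical value for target power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Hypothetical: detect a 0.2 difference in height-for-age z-score
# (sd = 1.0) with 80% power at alpha = 0.05 -> 393 children per arm.
print(n_per_arm(delta=0.2, sd=1.0))
```

Registering this calculation, including the assumed effect size and variance, is what makes under-powered post-hoc claims detectable later.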

2. Data Collection Transparency

Document data collection procedures, instruments, and challenges in real time.

Include: survey instruments, training materials, quality control procedures, adverse events, protocol deviations, and response rates. Store in accessible repository.

Why: Enables replication and helps future researchers avoid your mistakes.

3. Data Sharing

Make de-identified data publicly available with clear documentation.

Repositories: Harvard Dataverse, Open Science Framework, Dryad, or domain-specific archives. Include codebook, data dictionary, and any code used for cleaning or analysis. Embargo period acceptable if necessary (typically 12-24 months).

Why: Enables meta-analysis, sensitivity testing, and new hypotheses.

4. Analysis Code Publication

Share all code used to generate results, figures, and tables.

Platforms: GitHub, GitLab, or OSF. Include: data cleaning scripts, statistical analysis code, visualization code, and computational environment specifications (package versions, software versions). Use version control.

Why: Ensures computational reproducibility and catches errors.
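One concrete way to capture the computational environment specifications mentioned above is to write them to a machine-readable file committed alongside the analysis code. A minimal sketch using only the standard library; the filename `environment.json` is an arbitrary choice.

```python
# Sketch: record the Python version, platform, and installed package
# versions next to your analysis code, so future readers can
# reconstruct the computational environment. Uses only the stdlib;
# the filename "environment.json" is an arbitrary convention.
import json
import platform
import sys
from importlib import metadata

env = {
    "python": sys.version.split()[0],
    "platform": platform.platform(),
    "packages": {name: d.version
                 for d in metadata.distributions()
                 if (name := d.metadata["Name"]) is not None},
}
with open("environment.json", "w") as f:
    json.dump(env, f, indent=2, sort_keys=True)
```

Tools like pip's `requirements.txt` or conda environment files serve the same purpose; the point is that the record exists and is versioned, not the specific format.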

5. Plain-Language Summary

Write a 1-2 page summary accessible to non-specialists.

Include: research question in plain language, key findings without jargon, practical implications for implementers/policymakers, limitations and caveats, and directions for future research. Avoid academic hedging; be direct about what you learned.

Why: Bridges research-practice gap; increases real-world impact.

6. Null Results Publication

Publish null and negative results; don't file-drawer them.

Options: Traditional journals (increasingly accepting), PLOS ONE (publishes sound methodology regardless of results), Cochrane Database of Systematic Reviews, or preprint servers (arXiv, medRxiv, SocArXiv).

Why: Prevents wasted replication, reduces publication bias, advances knowledge.

⚠️ Data Sharing Ethics

Before sharing data, ensure:

  • Informed consent included permission for data sharing
  • All personally identifiable information removed
  • Small-cell suppression applied to prevent re-identification
  • Sensitive variables (health status, income, location) appropriately protected
  • Compliance with institutional IRB and local data protection laws

When in doubt, consult your IRB and prioritize participant privacy over data availability.
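Small-cell suppression, one of the safeguards listed above, can be as simple as masking counts below a threshold before release. A minimal sketch; the threshold of 5 is a common rule of thumb, and the district names and counts are hypothetical.

```python
# Minimal sketch of small-cell suppression before sharing tabular data:
# counts below a threshold are masked so that rare combinations cannot
# be used to re-identify participants. A threshold of 5 is a common
# rule of thumb; check your IRB's and data controller's requirements.
def suppress_small_cells(table, threshold=5):
    """Replace counts below `threshold` with None (published as '<5')."""
    return {cell: (count if count >= threshold else None)
            for cell, count in table.items()}

# Hypothetical cross-tab: severe-wasting case counts by district.
counts = {"district_a": 42, "district_b": 3, "district_c": 17}
print(suppress_small_cells(counts))
# district_b's count of 3 is suppressed before release.
```

Real releases also need complementary suppression (masking a second cell so the first cannot be recovered from row totals), which is why this step deserves review rather than a one-line script.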

Research Ethics: Non-Negotiable Standards

Ethical research protects participants, produces valid knowledge, and maintains public trust. These principles apply regardless of funding source, publication venue, or career incentives.

Mandatory Ethical Requirements

  • Informed consent: Participants understand the purpose, procedures, risks, benefits, and voluntary nature of participation. For vulnerable populations (children, people with limited literacy, displaced persons), use appropriate consent and assent procedures.
  • Privacy protection: Minimize data collection to essential variables. Use encryption, secure storage, and access controls. De-identify data before analysis. Never share identifiable data without explicit consent.
  • Equipoise in trials: RCTs are ethical only when genuine uncertainty exists about which intervention is superior. Don't randomize access to proven life-saving interventions.
  • Benefit sharing: Participants and communities should benefit from research, not just external researchers and institutions. Share findings, provide results, offer continued access to effective interventions.
  • Community engagement: Involve affected communities in research design, interpretation, and dissemination. Respect local knowledge and decision-making authority.
  • Conflict of interest disclosure: Declare all financial relationships, institutional affiliations, and potential biases. Funding sources should not influence study design or reporting.
  • Research integrity: No fabrication, falsification, or plagiarism. Report results honestly, including inconvenient findings. Correct errors promptly and publicly.
  • Authorship transparency: Credit all substantive intellectual contributors. No ghost authorship (uncredited writers) or gift authorship (undeserved credit).
  • Environmental responsibility: Minimize research footprint. Dispose of materials safely. Consider carbon costs of international travel and offsetting options.
  • Post-research obligations: Don't extract knowledge and leave. Capacity building, infrastructure sharing, and long-term partnerships demonstrate respect for host communities.

Open Science Principles

Science funded by public resources should produce public knowledge. These principles guide ethical and effective knowledge sharing.

🔓 Open Access Publishing
Publish in open-access journals or post preprints to public repositories. Knowledge behind paywalls cannot inform action in low-resource settings.
📊 Open Data
Share de-identified data in accessible repositories with clear documentation. Enable replication, meta-analysis, and new discoveries.
💻 Open Code
Publish analysis code, computational workflows, and software tools. Transparency enables error detection and methodological advancement.
📝 Open Protocols
Share study protocols, instruments, and materials before data collection. Pre-registration prevents selective reporting and publication bias.
🤝 Open Collaboration
Invite participation from diverse researchers, especially from low- and middle-income countries. Break down institutional and geographic silos.
🎓 Open Education
Share teaching materials, training resources, and capacity-building tools. Enable skill transfer and local research capacity development.

Research-to-Action Pathways

Academic publication is not the endpoint. These are concrete pathways for research to influence real-world decisions.

How Research Influences Action

Pathway 1: Direct Implementation
Practitioners read your study and adopt your intervention. Requires: accessible writing, clear methods description, implementation guidance, and cost data.
Pathway 2: Evidence Synthesis
Your study becomes part of systematic reviews and meta-analyses that inform guidelines. Requires: methodological rigor, data sharing, and engagement with review teams.
Pathway 3: Policy Influence
Policymakers cite your work when designing programs or regulations. Requires: policy briefs, direct engagement with government, media coverage, and patience.
Pathway 4: Funding Allocation
Foundations and donors use your cost-effectiveness analysis to prioritize interventions. Requires: transparent assumptions, sensitivity analysis, and comparable metrics.
Pathway 5: Methodological Advancement
Other researchers adopt your methods or build on your theoretical framework. Requires: detailed methodology, replication materials, and engagement with the research community.

Practical Steps to Increase Impact

  • Write policy briefs: 2-page summaries for non-specialists with clear recommendations
  • Engage stakeholders early: Involve practitioners and policymakers in research design
  • Present to diverse audiences: Not just academic conferences; also practitioner convenings, government workshops, foundation meetings
  • Cultivate media relationships: Help journalists understand and report your findings accurately
  • Create implementation resources: Toolkits, checklists, training materials that operationalize your findings
  • Track downstream use: Monitor how your work gets cited in program documents, policy papers, and funding decisions

Collaborative Research Models

The most impactful nutrition research often involves partnerships across institutions, disciplines, and sectors. These models balance intellectual contribution with operational constraints.

Partnership Structures

University-NGO Partnerships
Structure: Universities provide technical expertise and analysis; NGOs provide field access and implementation capacity
Key to success: Clear MOU on data ownership, authorship, timeline, and resource sharing. Respect operational constraints.
Multi-Country Consortia
Structure: Coordinated studies across multiple countries using harmonized protocols and pooled analysis
Key to success: Central coordination team, standardized instruments, regular communication, and equitable authorship.
Embedded Researchers
Structure: Researchers work within implementing organizations, conducting studies while supporting operations
Key to success: Dual accountability to research and operations, realistic timelines, and building organizational research capacity.
💡 When Partnerships Fail

Common failure modes:

  • Mismatched incentives (publication vs. program delivery)
  • Unclear data ownership and authorship from the start
  • Researchers impose unrealistic timelines on operations
  • Local partners treated as data collectors, not intellectual contributors
  • Research priorities don't align with organizational strategy

Prevention: Negotiate expectations explicitly upfront. Put agreements in writing. Build in flexibility. Share resources and credit generously.

Funding High-Impact Research

Research funding increasingly requires demonstrated relevance and stakeholder engagement. These principles strengthen proposals while maintaining scientific integrity.

Elements of Fundable Research

  • Clear decision relevance: Articulate exactly what decision-makers will do differently based on your findings
  • Stakeholder buy-in: Letters of support from implementers, policymakers, or communities who will use the results
  • Feasibility: Demonstrate access, partnerships, and capacity to execute in proposed timeline and budget
  • Dissemination plan: Concrete strategy for reaching non-academic audiences, not just "we will publish"
  • Capacity building: Especially for international work, show how you'll strengthen local research capacity
  • Cost-effectiveness: Justify budget. More expensive isn't better. Show value for money.

Major Nutrition Research Funders

  • Bill & Melinda Gates Foundation: Large grants, long timelines, focus on scalable solutions
  • USAID: Policy-relevant research in priority countries, U.S. institution preference
  • UNICEF: Operations research, evaluation, emergency nutrition
  • World Bank: Economic analysis, impact evaluation, systems strengthening
  • Wellcome Trust: Biomedical and implementation research, global health focus
  • NIH (Fogarty): Capacity building, training, U.S.-LMIC partnerships
  • UK Research Councils: GCRF, Newton Fund for international development research