Search Results
Introduction to Algorithms, 2021-2 Week 14: Notes
Introduction to Algorithms, 2020-1 Week 13: Notes
c - Algorithm for recursive evaluation of postfix expressions - Software Engineering Stack Exchange
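The linked thread's code isn't excerpted here; as a rough sketch of the technique the title names, a postfix expression can be evaluated recursively by consuming tokens from the right-hand end, where (scanning backwards) each operator is followed by its right operand and then its left operand. A minimal, illustrative Python version:

```python
# Minimal sketch of recursive postfix evaluation (illustrative, not the
# linked answer's code). Tokens are consumed from the right-hand end, so
# each operator sees its right operand before its left one.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def eval_postfix(expression):
    tokens = expression.split()

    def consume():
        tok = tokens.pop()          # take the rightmost remaining token
        if tok in OPS:
            right = consume()       # right operand appears first when scanning backwards
            left = consume()
            return OPS[tok](left, right)
        return float(tok)

    return consume()

print(eval_postfix("3 4 2 * +"))    # 3 + (4 * 2) -> 11.0
```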
CS 2 Program 5
Slides
c++ - Need Help Understanding Recursive Prefix Evaluator - Stack Overflow
Prefix Expression By Using Queue - YouTube
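The prefix-evaluation links above use the mirror-image idea: tokens are read from the front (a queue), and each operator recursively evaluates its left and then its right operand. Again a hedged sketch rather than the code from the linked posts:

```python
from collections import deque

# Illustrative sketch of queue-based recursive prefix evaluation (not taken
# from the linked posts). Tokens are consumed from the front of a queue.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def eval_prefix(expression):
    tokens = deque(expression.split())

    def consume():
        tok = tokens.popleft()      # take the leftmost remaining token
        if tok in OPS:
            left = consume()
            right = consume()
            return OPS[tok](left, right)
        return float(tok)

    return consume()

print(eval_prefix("+ 3 * 4 2"))     # 3 + (4 * 2) -> 11.0
```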
Will Your Nudge Have a Lasting Impact?
The context of Input-Process-Output-Outcome evaluation for assessing educational quality
Theory of Change Template | Miro
Technology Evaluation Canvas (TEC): technology evaluation to accompany บพข. grant applications - บพข.
How Do We Know If We Have Transformed Narrative Oceans? | by Pop Culture Collaborative | Dec, 2023 | Medium
And the result is a new beta framework: INCITE — Inspiring Narrative Change Innovation through Tracking and Evaluation. This new learning and evaluation framework has been developed to equip the pop culture narrative change field — comprised of artists, values-aligned entertainment leaders and companies, movement leaders, cultural strategists, narrative researchers, philanthropic partners, and more — with a shared methodology to unearth learnings and track short and long-term impact, at both the individual and collective levels. This launch of the beta INCITE framework is the first step in a road testing process set to take place over 2024 to make it useful and usable by field members and funders alike.
2023 Social Media Industry Benchmark Report | Rival IQ
Planning Programs – Program Development and Evaluation
Learning and Evaluation/Logic models - Meta
Evaluation of California's Statewide Mental Health Campaigns | RAND
Ditch “Statistical Significance” — But Keep Statistical Evidence | by Eric J. Daza, DrPH, MPS | Towards Data Science
“significant” p-value ≠ “significant” finding: The significance of statistical evidence for the true X (i.e., statistical significance of the p-value for the estimate of the true X) says absolutely nothing about the practical/scientific significance of the true X. That is, significance of evidence is not evidence of significance. Increasing your sample size in no way increases the practical/scientific significance of your practical/scientific hypothesis.

“significant” p-value = “discernible” finding: The significance of statistical evidence for the true X does tell us how well the estimate can discern the true X. That is, significance of evidence is evidence of discernibility. Increasing your sample size does increase how well your finding can discern your practical/scientific hypothesis.
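A small, self-contained illustration of that distinction (not from the article; the 0.1-unit effect and unit standard deviation are made-up numbers): holding the true effect fixed, the two-sided p-value it would earn shrinks as the sample size grows, so a smaller p-value signals better discernibility of the effect, not a larger or more important one.

```python
import math

# Hypothetical numbers for illustration: a fixed true effect of 0.1 units
# with standard deviation 1.0. The practical size of the effect never
# changes; only the sample size n does.
effect, sd = 0.1, 1.0

for n in (25, 100, 400, 1600, 6400):
    se = sd / math.sqrt(n)                 # standard error of the mean
    z = effect / se                        # test statistic for H0: no effect
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value
    print(f"n={n:5d}  effect={effect}  z={z:5.2f}  p={p:.3g}")
```

The effect column stays constant while p falls from roughly 0.6 to vanishingly small, which is the article's point: more data buys discernibility, not practical significance.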
Comparing Two Types of Online Survey Samples - Pew Research Center Methods | Pew Research Center
Opt-in samples are about half as accurate as probability-based panels
Conceptualizing, Embracing, and Measuring Failure in Social Marketing Practice - M Bilal Akbar, Liz Foote, Alison Lawson, 2023
While failure in social marketing practice represents an emerging research agenda, the discipline has not yet considered this concept systematically or cohesively. This lack of a clear conceptualization of failure in social marketing to aid practice thus presents a significant research gap.
Heuristic Evaluations: How to Conduct
Step-by-step instructions to systematically review your product to find potential usability and experience problems. Download a free heuristic evaluation template.
T-LEAF: Taxonomy Learning and EvaluAtion Framework | by Cen(Mia) Zhao | The Airbnb Tech Blog | Medium
Measures | Science of Behavior Change
Guide to evaluating behaviourally and culturally informed health interventions in complex settings
This framework proposes a stagewise model for evaluating the effectiveness and sustainability of behaviourally and culturally informed interventions in complex settings, with detailed guidance and accompanying tools. It presents the theoretical background, addresses the challenges of assessing causality during times of change and of influencing factors, and provides a method for measuring the unintended positive and negative effects of interventions on well-being, trust and social cohesion.
A Guide to Complexity-Aware Monitoring Approaches for MOMENTUM Projects - USAID MOMENTUM
User-Feedback Requests: 5 Guidelines
IndiKit - Guidance on SMART Indicators for Relief and Development Projects | IndiKit
Evaluating expectations from social and behavioral science about COVID-19 and lessons for the next pandemic
Social and behavioral science research proliferated during the COVID-19 pandemic, reflecting the substantial increase in influence of behavioral science in public health and public policy more broadly. This review presents a comprehensive assessment of 742 scientific articles on human behavior during COVID-19. Two independent teams evaluated 19 substantive policy recommendations (“claims”) on potentially critical aspects of behaviors during the pandemic drawn from the most widely cited behavioral science papers on COVID-19. Teams were made up of original authors and an independent team, all of whom were blinded to other team member reviews throughout. Both teams found evidence in support of 16 of the claims; for two claims, teams found only null evidence; and for no claims did the teams find evidence of effects in the opposite direction. One claim had no evidence available to assess. Seemingly due to the risks of the pandemic, most studies were limited to surveys, highlighting a need for more investment in field research and behavioral validation studies. The strongest findings point to interventions that combat misinformation and polarization and to effective forms of messaging that engage trusted leaders and emphasize positive social norms.
Theory of Change Workbook: A Step-by-Step Process for Developing or Strengthening Theories of Change | Eval Forward
The Art of Storytelling for Case Studies | by Ingrid Elias | Indeed Design
Nudged off a cliff - by Stuart Ritchie - Science Fictions
A recent meta-analysis looked like good news for the effectiveness of “nudge” theory. Does a new set of rebuttal letters throw the whole idea into doubt?
View of The Development of Effective and Tailored Digital Behavior Change Interventions: An introduction to the Multiphase Optimization Strategy (MOST)
QUT - MOPP - A/2.4 QUT Quality and Standards Framework
ADRI Quality Cycle
How to Conduct a Cognitive Walkthrough Workshop
A cognitive walkthrough is a technique used to evaluate the learnability of a system. Unlike user testing, it does not involve users (and, thus, it can be relatively cheap to implement). Like heuristic evaluations, expert reviews, and PURE evaluations, it relies on the expertise of a set of reviewers to assess the interface. Although cognitive walkthroughs can be conducted by an individual, they are designed to be done as part of a group in a workshop setting where evaluators walk through a task in a highly structured manner from a new user’s point of view.
Behavior Change Impact – Evidence Database of Social and Behavior Change Impact
Research consistently shows evidence-based social and behavior change (SBC) programs can increase knowledge, shift attitudes and norms and produce changes in a wide variety of behaviors. SBC has proven effective in several health areas, such as increasing the uptake of family planning methods, condom use for HIV prevention, and care-seeking for malaria. Between 2017 and 2019, a series of comprehensive literature reviews were conducted to consolidate evidence that shows the positive impact of SBC interventions on behavioral outcomes related to family planning, HIV, malaria, reproductive empowerment, and the reproductive health of urban youth in low- and middle-income countries. The result is five health area-specific databases that support evidence-based SBC. The databases are searchable by keyword, country, study design, intervention and behavior. The databases extract intervention details, research methodologies and results to facilitate searching. For each of the five health areas, a “Featured Evidence” section highlights a list of key articles demonstrating impact.
Meaningless Measurement – johnJsills
Broadly, these feedback surveys can be categorised into five groups: the pointless; the self-important; the immoral; the demanding; and the downright weird.
EMERGE – Evidence-based Measures of Empowerment for Research on Gender Equality – UC SAN DIEGO
EMERGE (Evidence-based Measures of Empowerment for Research on Gender Equality) is a project focused on gender equality and empowerment measures to monitor and evaluate health programs and to track progress on UN Sustainable Development Goal (SDG) 5: Achieve Gender Equality and Empower All Women and Girls. As reported by UN Women (2018), only 2 of the 14 SDG 5 indicators have accepted methodologies for measurement and data widely available. Of the remaining 12, 9 are indicators for which data are collected and available in only a limited number of countries. This assessment suggests notable measurement gaps in the state of gender equality and empowerment worldwide. EMERGE aims to improve the science of gender equality and empowerment measurement by identifying these gaps through the compilation and psychometric evaluation of available measures and supporting scientifically rigorous measure development research in India.
Meta-Analysis Learning Information Center
The Meta-Analysis Learning Information Center (MALIC) believes in equitably providing cutting-edge and up-to-date techniques in meta-analysis to researchers in the social sciences, particularly those in education and STEM education.
Balancing short-term & long-term results | by Brooke Tully
Achieving sustained behavior change takes a long time. I mean, hell, we’re still running ads about buckling seat-belts and most states made it a law 35 years ago! Beyond achieving behavior change, seeing the positive impact of said change on species, habitats and ecosystems can take even longer. So how can we balance these longer term goals with the need to show more immediate outcomes?
How Nonprofits Practice Continuous Improvement | Beth Kanter
The evaluation process – Europlanet Society
Mobile app validation: a digital health scorecard approach | npj Digital Medicine
Net Promoter Score Considered Harmful (and What UX Professionals Can Do About It) | by Jared M. Spool | Noteworthy - The Journal Blog
Guidelines for Costing of Social and Behavior Change Health Interventions
Costing is the process of data collection and analysis for estimating the cost of a health intervention. High-quality cost data on SBC are critical not only for developing budgets, planning, and assessing program proposals, but can also feed into advocacy, program prioritization, and agenda setting. To better serve these data needs, these guidelines aim to increase the quantity and quality of SBC costing information. By encouraging cost analysts to use a standardized approach based on widely accepted methodological principles, we expect the SBC Costing Guidelines to result in well-designed studies that measure cost at the outset, to allow assessment of cost-effectiveness and benefit-cost ratios for SBC programming. Such analyses could also potentially help advocates for SBC to better make the case for greater investment in SBC programming. These guidelines lay out a consistent set of methodological principles that reflect best practice and that can underpin any SBC costing effort.
All that glitters is not gold - 8 ways behaviour change can fail
Before we dive in, here is a quick summary of the proposed taxonomy of behaviour change failures:
- No effect
- Backfiring
- Intervention is effective but it's offset by a negative side effect
- Intervention isn't effective but there's a positive side effect
- A proxy measure changes but not the ultimate target behaviour
- Successful treatment effect offset by later (bad) behaviour
- Environment doesn't support the desired behaviour change
- Intervention triggers counteracting forces
Better evaluations
Learning from Behavioural Changes That Fail: Trends in Cognitive Sciences
The behavioural change enterprise disproportionately focuses on promoting successes at the expense of examining the failures of behavioural change interventions. We review the literature across different fields through a causal explanatory approach to identify structural relations that impede (or promote) the success of interventions. Based on this analysis we present a taxonomy of failures of behavioural change that catalogues different types of failures and backfiring effects. Our analyses and classification offer guidance for practitioners and researchers alike, and provide critical insights for establishing a more robust foundation for evidence-based policy.

Behavioural change techniques are currently used by many global organisations and public institutions. The amassing evidence base is used to answer practical and scientific questions regarding what cognitive, affective, and environmental factors lead to successful behavioural change in the laboratory and in the field. In this piece we show that there is also value in examining interventions that inadvertently fail to achieve their desired behavioural change (e.g., backfiring effects). We identify the underlying causal pathways that characterise different types of failure, and show how a taxonomy of causal interactions that result in failure exposes new insights that can advance theory and practice.