Ditch “Statistical Significance” — But Keep Statistical Evidence | by Eric J. Daza, DrPH, MPS | Towards Data Science
“significant” p-value ≠ “significant” finding: The significance of statistical evidence for the true X (i.e., statistical significance of the p-value for the estimate of the true X) says absolutely nothing about the practical/scientific significance of the true X. That is, significance of evidence is not evidence of significance. Increasing your sample size in no way increases the practical/scientific significance of your practical/scientific hypothesis.

“significant” p-value = “discernible” finding: The significance of statistical evidence for the true X does tell us how well the estimate can discern the true X. That is, significance of evidence is evidence of discernibility. Increasing your sample size does increase how well your finding can discern your practical/scientific hypothesis.
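The point is easy to see in a quick simulation: hold a tiny true effect fixed and the p-value shrinks toward zero as the sample grows, while the estimated effect size (the practically relevant quantity) stays put. A minimal sketch, assuming a standardized mean difference of 0.05 and a two-sample t-test (the effect size, sample sizes, and seed are illustrative choices, not taken from the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.05  # tiny standardized effect; practically negligible (assumed for illustration)

for n in (100, 1_000, 100_000):
    treatment = rng.normal(loc=true_effect, scale=1.0, size=n)
    control = rng.normal(loc=0.0, scale=1.0, size=n)
    t, p = stats.ttest_ind(treatment, control)
    # Cohen's d: the observed effect size does not grow with n, but p keeps shrinking
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd
    print(f"n={n:>7}  p={p:.4f}  Cohen's d={d:.3f}")
```

Typically the largest sample yields a vanishingly small p-value while Cohen's d stays near 0.05 throughout: discernibility improves, practical significance does not.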
Comparing Two Types of Online Survey Samples - Pew Research Center Methods | Pew Research Center
Opt-in samples are about half as accurate as probability-based panels
Measures | Science of Behavior Change
A Guide to Complexity-Aware Monitoring Approaches for MOMENTUM Projects - USAID MOMENTUM
User-Feedback Requests: 5 Guidelines
IndiKit - Guidance on SMART Indicators for Relief and Development Projects | IndiKit
How to Conduct a Cognitive Walkthrough Workshop
A cognitive walkthrough is a technique used to evaluate the learnability of a system. Unlike user testing, it does not involve users (and, thus, it can be relatively cheap to implement). Like heuristic evaluations, expert reviews, and PURE evaluations, it relies on the expertise of a set of reviewers to assess the interface. Although cognitive walkthroughs can be conducted by an individual, they are designed to be done as part of a group in a workshop setting where evaluators walk through a task in a highly structured manner from a new user’s point of view.
Meaningless Measurement – johnJsills
Broadly, these feedback surveys can be categorised into five groups: the pointless; the self-important; the immoral; the demanding; and the downright weird.
EMERGE – Evidence-based Measures of Empowerment for Research on Gender Equality – UC SAN DIEGO
EMERGE (Evidence-based Measures of Empowerment for Research on Gender Equality) is a project focused on gender equality and empowerment measures to monitor and evaluate health programs and to track progress on UN Sustainable Development Goal (SDG) 5: Achieve Gender Equality and Empower All Women and Girls. As reported by UN Women (2018), only 2 of the 14 SDG 5 indicators have accepted methodologies for measurement and widely available data. Of the remaining 12, 9 are indicators for which data are collected and available in only a limited number of countries. This assessment suggests notable measurement gaps in the state of gender equality and empowerment worldwide. EMERGE aims to improve the science of gender equality and empowerment measurement by identifying these gaps through the compilation and psychometric evaluation of available measures and by supporting scientifically rigorous measure development research in India.
Meta-Analysis Learning Information Center
The Meta-Analysis Learning Information Center (MALIC) believes in equitably providing cutting-edge and up-to-date techniques in meta-analysis to researchers in the social sciences, particularly those in education and STEM education.
Net Promoter Score Considered Harmful (and What UX Professionals Can Do About It) | by Jared M. Spool | Noteworthy - The Journal Blog
Better evaluations
We Analyzed 2,810 Profiles to Calculate Facebook Engagement Rate
Just-in-Time Adaptive Interventions and Adaptive Interventions – The Methodology Center
Facilitation Guide for an Integrated Evaluation Methodology: Most Significant Change and PhotoVoice | Health Social Change and Behaviour Change Network
Home | Better Evaluation
We are a global collaboration aimed at improving evaluation practice and theory through co-creation, curation, and sharing information.
2020-06-01 - The UX Research You’ll Need to Confidently Choose Your UX Metrics
A Simple Framework for Testing Your Social Media Ideas (+ 87 Ideas)
Evaluate Impact – Integrated SBCC Programs
Are We There Yet? A Communications Evaluation Guide
22 Tips for Building Meaningful Social Media Dashboards from All Networks | Databox Blog
Evaluating digital health products - GOV.UK
Evaluating Effect Size in Psychological Research: Sense and Nonsense - David C. Funder, Daniel J. Ozer, 2019
Daniel J. O’Keefe PUBLICATIONS AND PAPERS
Research on health communication messaging effects
Message Pretesting Using Perceived Persuasiveness Measures: Reconsidering the Correlational Evidence | Communication Methods and Measures
Social media analytics: A practical guidebook for journalists and other media professionals | Publications | DW Akademie | DW | 17.07.2019
This guidebook helps media professionals at small media houses develop a better understanding of how to use data to improve their social media performance. It also includes worksheets and templates.
Social and Behavior Change Monitoring Guidance | Breakthrough ACTION and RESEARCH
Breakthrough ACTION has distilled guidance on social and behavior change (SBC) monitoring methods into a collection of technical notes. Each note provides an overview of a monitoring method that may be used for SBC programs along with a description of when to use the method and its strengths and weaknesses.
Understanding how and why people change - Journal of Marketing Management
We applied a Hidden Markov Model* (see Figure 1) to examine how and why behaviours did or did not change. The longitudinal repeated-measures design meant we knew about food waste behaviour at two time points (the amount of food wasted before and after the program), about changes in the amount of food wasted reported over time for each household (more or less food wasted), and about other factors (e.g., self-efficacy). Using this method, we could extend our understanding beyond the overall effect (households in the Waste Not Want Not program group wasted less food after participating, compared with the control group).
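For readers curious what such a model looks like in code, here is a rough two-state sketch using the hmmlearn package. The toy data, state count, and variable names are assumptions for illustration only and do not reproduce the authors' analysis:

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

# Toy data: repeated self-reports of food waste (kg) for one household,
# measured before and after a program (values are made up for illustration).
waste = np.array([3.1, 2.9, 3.3, 3.0, 3.2, 2.8,
                  1.4, 1.2, 1.5, 1.1, 1.3, 1.2]).reshape(-1, 1)

# Two hidden states, interpretable as "high-waste" vs "low-waste" behaviour.
model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(waste)

states = model.predict(waste)            # most likely hidden state per observation
print("Hidden states:", states)
print("Transition matrix:\n", model.transmat_)  # estimated probability of switching states
```

The transition matrix is what lets this kind of model speak to "how and why behaviours did or did not change": it estimates how likely a household is to move between the latent behavioural states over time, beyond the average program effect.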
Design and statistical considerations in the evaluation of digital behaviour change interventions | UCL CBC Digi-Hub Blog
NetworkWeaver - Weaving Smart Networks
Resources for mapping, assessing, and weaving networks
Measuring Program Outcomes: A Practical Approach - United Way of America
Guide to outcome evaluation and development of logic models
Null results should produce answers, not excuses — R&E Search for Evidence
Mobile App Rating Scale: A New Tool for Assessing the Quality of Health Mobile Apps
Behavioral Design: When to Fire a Cannon and When to Use a Precision Knife | Nicolae NAUMOF | LinkedIn
‘Nudge unit’ defies sceptics to change Whitehall thinking - FT.com
Whitepaper: 6 Models for Measuring the ROI of Social Media Marketing - Ignite Social Media
UNDERSTANDING METRICS Guides - Media Impact Project
Web Metrics, YouTube Basics and Mobile Metrics Guides