Ditch “Statistical Significance” — But Keep Statistical Evidence | by Eric J. Daza, DrPH, MPS | Towards Data Science
“significant” p-value ≠ “significant” finding: The significance of statistical evidence for the true X (i.e., statistical significance of the p-value for the estimate of the true X) says absolutely nothing about the practical/scientific significance of the true X. That is, significance of evidence is not evidence of significance. Increasing your sample size in no way increases the practical/scientific significance of your practical/scientific hypothesis.

“significant” p-value = “discernible” finding: The significance of statistical evidence for the true X does tell us how well the estimate can discern the true X. That is, significance of evidence is evidence of discernibility. Increasing your sample size does increase how well your finding can discern your practical/scientific hypothesis.
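A quick way to see Daza's point is to simulate it. The sketch below is my own illustration, not from the article; the effect size of 0.03 and the sample sizes are arbitrary assumptions. It holds a practically negligible effect fixed while the sample size grows: the p-value collapses toward zero, yet Cohen's d, the practical size of the effect, never moves.

```python
# Illustration (not from Daza's article): a fixed, practically trivial
# effect becomes ever more "significant" as n grows, while its
# practical size (Cohen's d) stays the same.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.03  # assumed: a tiny, practically negligible mean difference

for n in (100, 10_000, 1_000_000):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(treated, control)
    # Cohen's d: standardized effect size, essentially unaffected by n
    d = (treated.mean() - control.mean()) / np.sqrt(
        (treated.var(ddof=1) + control.var(ddof=1)) / 2
    )
    print(f"n={n:>9,}  p={p:.4f}  Cohen's d={d:.3f}")
```

Running it, the p-value crosses any conventional significance threshold once n is large, even though the effect stays trivially small — the p-value measures discernibility, not importance.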
Comparing Two Types of Online Survey Samples | Pew Research Center Methods
Opt-in samples are about half as accurate as probability-based panels
IndiKit - Guidance on SMART Indicators for Relief and Development Projects | IndiKit
Meaningless Measurement – johnJsills
Broadly, these feedback surveys can be categorised into five groups: the pointless; the self-important; the immoral; the demanding; and the downright weird.
EMERGE – Evidence-based Measures of Empowerment for Research on Gender Equality – UC San Diego
EMERGE (Evidence-based Measures of Empowerment for Research on Gender Equality) is a project focused on gender equality and empowerment measures to monitor and evaluate health programs and to track progress on UN Sustainable Development Goal (SDG) 5: Achieve Gender Equality and Empower All Women and Girls. As reported by UN Women (2018), only 2 of the 14 SDG 5 indicators have accepted measurement methodologies and widely available data. Of the remaining 12, 9 are indicators for which data are collected and available in only a limited number of countries. This assessment suggests notable measurement gaps in the state of gender equality and empowerment worldwide. EMERGE aims to improve the science of gender equality and empowerment measurement by identifying these gaps through the compilation and psychometric evaluation of available measures, and by supporting scientifically rigorous measure-development research in India.
Meta-Analysis Learning Information Center
The Meta-Analysis Learning Information Center (MALIC) believes in equitably providing cutting-edge and up-to-date techniques in meta-analysis to researchers in the social sciences, particularly those in education and STEM education.
Net Promoter Score Considered Harmful (and What UX Professionals Can Do About It) | by Jared M. Spool | Noteworthy - The Journal Blog
We Analyzed 2,810 Profiles to Calculate Facebook Engagement Rate
Just-in-Time Adaptive Interventions and Adaptive Interventions – The Methodology Center
How to Analyze Instagram Stories: 7 Metrics to Track | Social Media Examiner
Evaluating digital health products - GOV.UK
How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies - ScienceDirect
Evaluating Effect Size in Psychological Research: Sense and Nonsense - David C. Funder, Daniel J. Ozer, 2019
Daniel J. O’Keefe PUBLICATIONS AND PAPERS
Research on health communication messaging effects
Social and Behavior Change Monitoring Guidance | Breakthrough ACTION and RESEARCH
Breakthrough ACTION has distilled guidance on social and behavior change (SBC) monitoring methods into a collection of technical notes. Each note provides an overview of a monitoring method that may be used for SBC programs along with a description of when to use the method and its strengths and weaknesses.
Understanding how and why people change - Journal of Marketing Management
We applied a Hidden Markov Model to examine how and why behaviours did or did not change. The longitudinal repeated-measures design meant we knew about food waste behaviour at two points (the amount of food wasted before and after the program), about changes in the amount of food wasted reported over time for each household (more or less food wasted), and about other factors (e.g., self-efficacy). By using this method we could extend our understanding beyond the overall effect (households in the Waste Not Want Not program group wasted less food after participating than households in the control group).
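As a loose illustration of the kind of model the excerpt describes, here is a minimal two-state Hidden Markov Model with Viterbi decoding. All state names, observation labels, and probabilities below are invented for the sketch; they are not the paper's actual model specification.

```python
# Toy two-state HMM, loosely in the spirit of the food-waste example above.
# States, observations, and probabilities are made up for illustration.
import numpy as np

states = ["low_waste", "high_waste"]              # hypothetical hidden states
obs_symbols = ["less_reported", "more_reported"]  # hypothetical observations

start_p = np.array([0.5, 0.5])           # initial state distribution
trans_p = np.array([[0.8, 0.2],          # P(next state | current state)
                    [0.3, 0.7]])
emit_p = np.array([[0.9, 0.1],           # P(observation | state)
                   [0.2, 0.8]])

def viterbi(obs):
    """Most likely hidden-state path for a sequence of observation indices."""
    n_states, T = len(states), len(obs)
    logv = np.full((T, n_states), -np.inf)   # log-prob of best path so far
    back = np.zeros((T, n_states), dtype=int)
    logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logv[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            logv[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    # Trace the best path backwards from the most likely final state
    path = [int(np.argmax(logv[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

# Pre- and post-program reports for one household (indices into obs_symbols)
print(viterbi([1, 0]))  # -> ['high_waste', 'low_waste']
```

The appeal of this class of model for behaviour-change evaluation is visible even in the toy: the decoded state path describes each household's trajectory (e.g., moving from a high-waste to a low-waste state), not just the average treatment effect.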
Design and statistical considerations in the evaluation of digital behaviour change interventions | UCL CBC Digi-Hub Blog
Behavioral Design: When to Fire a Cannon and When to Use a Precision Knife | Nicolae NAUMOF | LinkedIn
UNDERSTANDING METRICS Guides - Media Impact Project
Web Metrics, YouTube Basics and Mobile Metrics Guides
How to Analyze Your Social Media Activities With Excel | Social Media Examiner
Paine Publishing | Standards Central
A Marketer's Guide to Understanding Statistical Significance
OBSSR e-Source - Behavioral & Social Sciences Research Methodology guide
Survey Question Bank
CPSA (Cost Per Social Action): The New Pricing Model for Social Media?
Is It Worth It? An ROI Calculator for Social Network Campaigns | frogloop (Care2's blog for nonprofits)
Simple Lesson in Public Sector/Non-Profit Website Return on Investment (ROI)
Public Sector Marketing 2.0
RWJF - Research - Tools & Resources
"Guide to Evaluation Primers" and "Planning and Using Survey Research Projects" from Robert Wood Johnson Foundation