Responses to a COVID-19 Vaccination Intervention: Qualitative Analysis of 17K Unsolicited SMS Replies
The development of effective interventions for COVID-19 vaccination has proven challenging given the unique and evolving determinants of that behavior. A tailored intervention to drive vaccination uptake through machine learning-enabled personalization of behavior change messages unexpectedly yielded a high volume of real-time short message service (SMS) feedback from recipients. A qualitative analysis of those replies contributes to a better understanding of the barriers to COVID-19 vaccination and demographic variations in determinants, supporting design improvements for vaccination interventions. Objective: The purpose of this study was to examine unsolicited replies to a text message intervention for COVID-19 vaccination to understand the types of barriers experienced and any relationships between recipient demographics, intervention content, and reply type. Method: We categorized SMS replies into 22 overall themes. Interrater agreement was very good (all κpooled > 0.62). Chi-square analyses were used to understand demographic variations in reply types and which messaging types were most related to reply types. Results: In total, 10,948 people receiving intervention text messages sent 17,090 replies. Most frequent reply types were “already vaccinated” (31.1%), attempts to unsubscribe (25.4%), and “will not get vaccinated” (12.7%). Within “already vaccinated” and “will not get vaccinated” replies, significant differences were observed in the demographics of those replying against expected base rates, all p < .001. Of those stating they would not vaccinate, 34% of the replies involved mis-/disinformation, suggesting that a determinant of vaccination involves nonvalidated COVID-19 beliefs. Conclusions: Insights from unsolicited replies can enhance our ability to identify appropriate intervention techniques to influence COVID-19 vaccination behaviors.
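For readers who want to see the mechanics, here is a minimal sketch in Python of the kind of chi-square test of independence the abstract describes, comparing reply-type distributions across demographic groups. All counts and group labels are invented for illustration; they are not the study's data.

```python
# Sketch of a chi-square test of independence between a demographic
# split and reply type. All counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: hypothetical age bands; columns: reply types
# ("already vaccinated", "unsubscribe", "will not get vaccinated").
observed = [
    [310, 220, 90],   # 18-44
    [420, 260, 150],  # 45-64
    [380, 180, 110],  # 65+
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```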
Create a Qualitative Rubric (Original) - eLearning - University of Queensland
Qualitative grading and feedback with rubrics - Mindlogicx
Understanding fraudulence in online qualitative studies: From the researcher’s perspective
Where Do the 3 Concept Types Come From? | by Indi Young | Inclusive Software | Mar, 2024 | Medium
In my research, I focus on three things that ran through people’s minds when they were working toward something. These three things are: inner thinking (thoughts, pondering, reasoning); emotional reactions (feelings, moods); and guiding principles (personal rules).
It’s Time to Change the Way We Write Screeners | Sago
And remember, 12 questions is the magic number: keeping screeners under that length helps prevent attrition.
Ikea came into my house. Here's what they said | The Post
Ikea researchers explore Kiwi homes before opening first NZ store. Christine Gough, head of interior design at Ikea Australia, is one of 40 Ikea researchers visiting hundreds of Kiwi homes to gauge what products to stock in its Auckland mega store.
Badly designed surveys don’t promote sustainability, they harm it
How to use a new generation data collection and analysis tool? - The Cynefin Co
This is SenseMaker in its most simple form, usually structured to open with a non-hypothesis question (commonly referred to as a ‘prompting question’) that collects a micro-narrative. This is then followed by a range of triads (triangles), dyads (sliders), stones (canvases), free-text questions, and multiple-choice questions. The value of using SenseMaker: open free-text questions are used at the beginning as a way of scanning for a diversity of narratives and experiences. This is a way to remain open to ‘unknown unknowns’. The narrative is then followed by signifier questions that allow the respondent to add layers of meaning and codification to the narrative (or experience), enabling mixed-methods analysis and the mapping and exploration of patterns.
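As a concrete (and entirely hypothetical) illustration, the instrument structure described above maps naturally onto a small data model. The Python sketch below invents its own type and field names; it does not reflect SenseMaker's actual internals or API.

```python
# Hypothetical data model for a SenseMaker-style instrument, mirroring
# the description above only; SenseMaker is a proprietary tool and
# these names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Triad:
    question: str
    corners: tuple[str, str, str]  # respondent places a mark between three anchors

@dataclass
class Dyad:
    question: str
    poles: tuple[str, str]         # a slider between two anchors

@dataclass
class Instrument:
    prompting_question: str        # open, non-hypothesis micro-narrative prompt
    triads: list[Triad] = field(default_factory=list)
    dyads: list[Dyad] = field(default_factory=list)
    multiple_choice: dict[str, list[str]] = field(default_factory=dict)

instrument = Instrument(
    prompting_question="Tell us about a recent experience that stood out to you.",
    triads=[Triad("In this story, what mattered most?",
                  ("safety", "speed", "fairness"))],
    dyads=[Dyad("How did this experience leave you feeling?",
                ("drained", "energised"))],
    multiple_choice={"Who was involved?": ["family", "colleagues", "strangers"]},
)
```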
Balancing Natural Behavior with Incentives and Accuracy in Diary Studies
Eight tips for using a word cloud in market research story finding
IndiKit - Guidance on SMART Indicators for Relief and Development Projects | IndiKit
6 Tips for Better Participant Engagement in Diary Studies
Qualitative Social Media Research Resources - Google Docs
Does eating white bread make you feel lonely? - YouTube
Stakeholder Interviews 101
Integrity Initiative - KNow Whitepaper 2022.pdf
How to screen out fraudulent qualitative research participants
A step-by-step guide to user research note taking | by Arnav Kumar | UX Planet
How to Recruit Participants for UX Research
Sample size for qualitative research: The risk of missing something important | Peter J DePaulo - Academia.edu
Until the definitive answer is provided, perhaps an N of 30 respondents is a reasonable starting point for deciding the qualitative sample size that can reveal the full range (or nearly the full range) of potentially important customer perceptions. An N of 30 reduces the probability of missing a perception with a 10-percent incidence to less than 5 percent (assuming random sampling), and it is the upper end of the range found by Griffin and Hauser. If the budget is limited, we might reduce the N below 30, but the client must understand the increased risks of missing perceptions that may be worth knowing. If the stakes and budget are high enough, we might go with a larger sample in order to ensure that smaller (or harder to reach) subgroups are still likely to be represented.
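The arithmetic behind that claim is a simple binomial calculation: the probability that a random sample of N respondents contains nobody who holds a perception with incidence p is (1 − p)^N. A few lines of Python confirm the less-than-5-percent figure:

```python
# Binomial reasoning behind the N=30 heuristic: if a perception is held
# by 10% of the population and we sample N respondents at random, the
# chance that none of them holds it is (1 - 0.10) ** N.
p_incidence = 0.10

for n in (10, 20, 30):
    p_miss = (1 - p_incidence) ** n
    print(f"N={n}: P(missing a 10%-incidence perception) = {p_miss:.3f}")

# N=30 gives ~0.042, i.e. under the 5% risk threshold cited above.
```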
User Diary Studies - An effective research method for evaluating user behavior long-term
Systems Mapping: How to build and use causal models of systems
The question researchers should all stop asking
We want to take the shortcut and ask the why question, but please, resist the urge. Reframe it and you’ll find you are getting a more honest answer that is closer to authentic truth.
Research methods for discovery
Whilst you’re shaping the problem space, and then during the first diamond of understanding and defining which user needs to focus on, you should ideally get out of the lab or the office. When you have defined your solution and are iterating on it, that’s the best time to use your go-to method — lab usability testing in a lot of cases; remote interviewing is mine. This is because you likely need cycles of quick feedback and iteration, so you need a tried and trusted method that lets you spin up a sprint of research quickly and efficiently. So how about when time and efficiency aren’t quite so important, and the quality and depth of understanding or engagement of stakeholders are the key drivers? Here are some examples from my toolkit:
6 Mistakes When Crafting Interview Questions
Dr. Emily Anhalt on Twitter: “11 magic therapy phrases that are useful for every conversation (and why they work)
Yes, You Can Generalize from a Case Study | by Bent Flyvbjerg | Geek Culture | Medium
Paper Prototyping: A Cutout Kit
Using a Translator During Usability Testing (Video)
Sample sizes for saturation in qualitative research: A systematic review of empirical tests - ScienceDirect
Results show that saturation was typically reached within 9–17 interviews or 4–8 focus group discussions.
How Many Participants for a UX Interview?
How many interviews are enough depends on when you reach saturation, which, in turn, depends on your research goals and the people you’re studying. To avoid doing more interviews than you need, start small and analyze as you go, so you can stop once you’re no longer learning anything new.
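One way to make “analyze as you go” concrete is a simple stopping rule: code each interview as it is completed and stop once several consecutive interviews contribute no new codes. The Python sketch below is one hypothetical implementation of that rule; the three-interview window is illustrative, not a standard.

```python
# Hedged sketch of an incremental saturation check: stop interviewing
# once the last `window` interviews surfaced no codes we hadn't seen.
def reached_saturation(coded_interviews: list[set[str]], window: int = 3) -> bool:
    """True if the last `window` interviews added no new codes."""
    if len(coded_interviews) <= window:
        return False
    seen: set[str] = set()
    new_per_interview = []
    for codes in coded_interviews:
        new_per_interview.append(len(codes - seen))
        seen |= codes
    return all(n == 0 for n in new_per_interview[-window:])

interviews = [{"cost", "trust"}, {"trust", "speed"}, {"cost"},
              {"speed"}, {"trust"}, {"cost", "speed"}]
print(reached_saturation(interviews))  # True: last 3 added nothing new
```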
Better Than Why - “What Makes You Ask?“
JMIR Infodemiology - Infodemic Signal Detection During the COVID-19 Pandemic: Development of a Methodology for Identifying Potential Information Voids in Online Conversations
Objective: In this work, we aimed to develop a practical, structured approach to identify narratives in public online conversations on social media platforms where concerns or confusion exist or where narratives are gaining traction, thus providing actionable data to help the WHO prioritize its response efforts to address the COVID-19 infodemic. Methods: We developed a taxonomy to filter global public conversations in English and French related to COVID-19 on social media into 5 categories with 35 subcategories. The taxonomy and its implementation were validated for retrieval precision and recall, and they were reviewed and adapted as language about the pandemic in online conversations changed over time. The aggregated data for each subcategory were analyzed on a weekly basis by volume, velocity, and presence of questions to detect signals of information voids with potential for confusion or where mis- or disinformation may thrive. A human analyst reviewed and identified potential information voids and sources of confusion, and quantitative data were used to provide insights on emerging narratives, influencers, and public reactions to COVID-19–related topics. Results: A COVID-19 public health social listening taxonomy was developed, validated, and applied to filter relevant content for more focused analysis. A weekly analysis of public online conversations since March 23, 2020, enabled quantification of shifting interests in public health–related topics concerning the pandemic, and the analysis demonstrated recurring voids of verified health information. This approach therefore focuses on the detection of infodemic signals to generate actionable insights to rapidly inform decision-making for a more targeted and adaptive response, including risk communication.
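The three weekly signals the abstract names (volume, velocity, and presence of questions) are straightforward to compute per taxonomy subcategory. The sketch below is a loose Python illustration; the thresholds and the end-of-string question heuristic are invented for the example and are not the WHO methodology's actual parameters.

```python
# Illustrative weekly signal metrics for one taxonomy subcategory:
# volume, velocity (week-over-week change), and share of posts that
# are questions. Thresholds are made up for this example.
def weekly_signals(posts_this_week: list[str], volume_last_week: int,
                   spike_ratio: float = 2.0, question_share: float = 0.3):
    volume = len(posts_this_week)
    velocity = volume / max(volume_last_week, 1)
    questions = sum(post.strip().endswith("?") for post in posts_this_week)
    q_share = questions / max(volume, 1)
    # Flag a potential information void: conversation is accelerating
    # and unusually question-heavy (many asking, few answering).
    flagged = velocity >= spike_ratio and q_share >= question_share
    return {"volume": volume, "velocity": velocity,
            "question_share": q_share, "potential_void": flagged}

week = ["Is the new variant airborne?", "Where can I get tested?",
        "Clinic opens Monday.", "Does the vaccine affect fertility?"]
print(weekly_signals(week, volume_last_week=1))
```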
5 Facilitation Mistakes to Avoid During User Interviews
Julie Zhuo on Twitter: “Is there a term for someone who geeks out on how to get to know someone better? Because I'm definitely in that club. So of course I looooove thinking about interview questions. Thread of my favorite questions to ask folks to un
Metaphorically Speaking: A Linguist’s Perspective on the Power of Unorthodox Questions to Uncover Unique Patient Insights
How to Handle Dominating Participants in UX Workshops: 3 Tactics
Testing Content with Users
Conducting Successful Virtual Focus Groups - Child Trends
DOING FIELDWORK IN A PANDEMIC - Google Docs
Analyzing Qualitative User Data in a Spreadsheet to Show Themes (Video)
Remote Usability-Testing Costs: Moderated vs. Unmoderated
Data collecting: Tips and tricks for taking notes – Dana Chisnell
Why you should be using virtual focus groups :: Social Change
A Practical Guide to Conducting a Barrier Analysis
Using Twitter as a data source: an overview of social media research tools (2019) | Impact of Social Sciences
Chapter 4 Using Twitter as a Data Source: An Overview of Ethical, Legal, and Methodological Challenges - White Rose Research Online
Text as Data
This class covers a range of different topics that build on top of each other. For example, in the first tutorial, you will learn how to collect data from Twitter, and in subsequent tutorials you will learn how to analyze those data using automated text analysis techniques. For this reason, you may find it difficult to jump towards one of the most advanced issues before covering the basics. The topics: Introduction: Strengths and Weaknesses of Text as Data; Application Programming Interfaces; Screen-Scraping; Basic Text Analysis; Dictionary-Based Text Analysis; Topic Modeling; Text Networks; Word Embeddings.
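To give a flavor of the “Dictionary-Based Text Analysis” topic on that list, here is a small self-contained Python example that scores short texts against a hand-built term dictionary. The dictionary and texts are invented for illustration and are not from the course.

```python
# Minimal dictionary-based text analysis: count how many tokens in a
# document match each category of a hand-built term dictionary.
import re
from collections import Counter

SENTIMENT = {"positive": {"great", "love", "helpful"},
             "negative": {"bad", "confusing", "broken"}}

def dictionary_scores(text: str) -> Counter:
    tokens = re.findall(r"[a-z']+", text.lower())
    scores = Counter()
    for category, terms in SENTIMENT.items():
        scores[category] = sum(tok in terms for tok in tokens)
    return scores

tweets = ["Love the new update, really helpful!",
          "Settings menu is confusing and search is broken."]
for t in tweets:
    print(dictionary_scores(t))
```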