yabs.io

Yet Another Bookmarks Service

Results

[https://psycnet.apa.org/fulltext/2023-75617-001.pdf] - - public:weinreich
behavior_change, mobile, qualitative, research, technology - 5 | id:1510408 -

The development of effective interventions for COVID-19 vaccination has proven challenging given the unique and evolving determinants of that behavior. A tailored intervention to drive vaccination uptake through machine learning-enabled personalization of behavior change messages unexpectedly yielded a high volume of real-time short message service (SMS) feedback from recipients. A qualitative analysis of those replies contributes to a better understanding of the barriers to COVID-19 vaccination and demographic variations in determinants, supporting design improvements for vaccination interventions.

Objective: The purpose of this study was to examine unsolicited replies to a text message intervention for COVID-19 vaccination to understand the types of barriers experienced and any relationships between recipient demographics, intervention content, and reply type.

Method: We categorized SMS replies into 22 overall themes. Interrater agreement was very good (all pooled κ > 0.62). Chi-square analyses were used to understand demographic variations in reply types and which messaging types were most related to reply types.

Results: In total, 10,948 people receiving intervention text messages sent 17,090 replies. Most frequent reply types were “already vaccinated” (31.1%), attempts to unsubscribe (25.4%), and “will not get vaccinated” (12.7%). Within “already vaccinated” and “will not get vaccinated” replies, significant differences were observed in the demographics of those replying against expected base rates, all p < .001. Of those stating they would not vaccinate, 34% of the replies involved mis-/disinformation, suggesting that a determinant of vaccination involves nonvalidated COVID-19 beliefs.

Conclusions: Insights from unsolicited replies can enhance our ability to identify appropriate intervention techniques to influence COVID-19 vaccination behaviors.
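
As a rough illustration of the chi-square comparison against expected base rates described above, here is a minimal sketch; all counts, group labels, and base rates below are hypothetical, not figures from the study.

```python
# Hedged sketch: a chi-square goodness-of-fit test of the kind described in the
# abstract, comparing observed reply counts by demographic group against the
# expected base rates of the full recipient pool. All figures are hypothetical.
from scipy.stats import chisquare

# Hypothetical counts of "will not get vaccinated" replies by age band
observed = [220, 310, 180, 90]            # e.g. 18-29, 30-44, 45-64, 65+
base_rates = [0.35, 0.30, 0.25, 0.10]     # share of all message recipients per band

total = sum(observed)
expected = [rate * total for rate in base_rates]

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.4f}")
```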

[https://medium.com/inclusive-software/where-do-the-3-concept-types-come-from-99a00c2a4edd] - - public:weinreich
qualitative, research, target_audience - 3 | id:1490822 -

In my research, I focus on three things that ran through people’s minds when they were working toward something. These three things are: inner thinking (thoughts, pondering, reasoning); emotional reactions (feelings, moods); and guiding principles (personal rules).

[https://www.thepost.co.nz/business/350107777/ikea-came-my-house-heres-what-they-said-and-why-nz-prime-market-swedish] - - public:weinreich
design, qualitative, research - 3 | id:1485140 -

Ikea researchers explore Kiwi homes before opening first NZ store. Christine Gough, head of interior design at Ikea Australia, is one of 40 Ikea researchers visiting hundreds of Kiwi homes to gauge what products to stock in its Auckland mega store.

[https://thecynefin.co/how-to-use-data-collection-analysis-tool/] - - public:weinreich
management, qualitative, quantitative, research, storytelling, target_audience - 6 | id:1484377 -

This is SenseMaker in its most simple form, usually structured to have an open (non-hypothesis) question (commonly referred to as a ‘prompting question’) to collect a micro-narrative at the start. This is then followed by a range of triads (triangles), dyads (sliders), stones canvases, free text questions, and multiple choice questions. The reason for and value of using SenseMaker: open free text questions are used at the beginning as a way of scanning for diversity of narratives and experiences. This is a way to remain open to ‘unknown unknowns’. The narrative is then followed by signifier questions that allow the respondent to add layers of meaning and codification to the narrative (or experience) in order to allow for mixed methods analysis, to map and explore patterns.
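
For illustration only, here is one way such a signified response could be represented for mixed-methods analysis; the field names and the triad/dyad encoding are assumptions, not SenseMaker’s actual schema.

```python
# Illustrative sketch only: one way to represent a single SenseMaker-style
# response (micro-narrative plus signifier answers). Field names and the
# triad/dyad encoding are assumptions, not SenseMaker's schema.
from dataclasses import dataclass, field

@dataclass
class SignifiedNarrative:
    narrative: str                      # free-text answer to the prompting question
    triads: dict[str, tuple[float, float, float]] = field(default_factory=dict)
    dyads: dict[str, float] = field(default_factory=dict)          # 0.0-1.0 slider
    multiple_choice: dict[str, str] = field(default_factory=dict)

    def __post_init__(self):
        # A triad position is a point inside a triangle: three weights summing to 1.
        for name, (a, b, c) in self.triads.items():
            if abs((a + b + c) - 1.0) > 1e-6:
                raise ValueError(f"Triad '{name}' weights must sum to 1")

response = SignifiedNarrative(
    narrative="When the clinic moved online, I finally felt heard...",
    triads={"What mattered most": (0.6, 0.3, 0.1)},
    dyads={"The experience felt": 0.8},
    multiple_choice={"Who was involved": "Family"},
)
```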

[https://www.nngroup.com/articles/stakeholder-interviews/?utm_source=Alertbox&utm_campaign=2361996408-EMAIL_CAMPAIGN_2020_11_12_08_52_COPY_01&utm_medium=email&utm_term=0_7f29a2b335-2361996408-24361717] - - public:weinreich
management, qualitative, research - 3 | id:1287289 -

[https://www.academia.edu/36897806/Sample_size_for_qualitative_research_The_risk_of_missing_something_important] - - public:weinreich
qualitative, research - 2 | id:1222012 -

Until the definitive answer is provided, perhaps an N of 30 respondents is a reasonable starting point for deciding the qualitative sample size that can reveal the full range (or nearly the full range) of potentially important customer perceptions. An N of 30 reduces the probability of missing a perception with a 10 percent incidence to less than 5 percent (assuming random sampling), and it is the upper end of the range found by Griffin and Hauser. If the budget is limited, we might reduce the N below 30, but the client must understand the increased risks of missing perceptions that may be worth knowing. If the stakes and budget are high enough, we might go with a larger sample in order to ensure that smaller (or harder to reach) subgroups are still likely to be represented.
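
A minimal sketch of the arithmetic behind that claim, assuming simple random sampling and independent respondents: the probability that none of n respondents mentions a perception held by a fraction p of customers is (1 - p)^n.

```python
# The chance that a perception held by a fraction p of the population is
# missed entirely in a random sample of n respondents is (1 - p) ** n.
def prob_missing(p: float, n: int) -> float:
    return (1 - p) ** n

print(prob_missing(0.10, 30))   # ~0.042, i.e. under 5 percent
print(prob_missing(0.10, 20))   # ~0.122, the added risk if budget forces a smaller N
```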

[https://www.researchnewslive.com.au/2022/05/24/the-question-researchers-should-all-stop-asking/] - - public:weinreich
qualitative, research - 2 | id:1119095 -

We want to take the shortcut and ask the why question, but please, resist the urge. Reframe it and you’ll find you are getting a more honest answer that is closer to authentic truth.

[https://medium.com/@emmaboulton/research-methods-for-discovery-5c7623f1b2fb] - - public:weinreich
design, qualitative, research - 3 | id:1074484 -

Whilst you’re shaping the problem space, and then during the first diamond of understanding and defining which user needs to focus on, you should ideally get out of the lab or the office. When you have defined your solution and are iterating on it, that’s the best time to use your go-to method (lab usability testing in a lot of cases; remote interviewing is mine). This is because you likely need cycles of quick feedback and iteration, so you need a tried and trusted method that lets you spin up a sprint of research quickly and efficiently. So what about when time and efficiency aren’t quite so important, and the quality and depth of understanding or engagement of stakeholders are the key drivers? Here are some examples from my toolkit:

[https://www.nngroup.com/articles/interview-sample-size/?utm_source=Alertbox&utm_campaign=48f62e824a-EMAIL_CAMPAIGN_2020_11_12_08_52_COPY_01&utm_medium=email&utm_term=0_7f29a2b335-48f62e824a-24361717] - - public:weinreich
design, qualitative, research - 3 | id:958261 -

How many interviews are enough depends on when you reach saturation, which, in turn, depends on your research goals and the people you’re studying. To avoid doing more interviews than you need, start small and analyze as you go, so you can stop once you’re no longer learning anything new.
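
One way to operationalize “analyze as you go” is to track how many previously unseen codes each additional interview contributes. A minimal sketch follows; the codes and the two-interview stopping rule are illustrative assumptions, not NN/g’s method.

```python
# Minimal sketch of "analyze as you go": after coding each interview, count how
# many previously unseen codes it added; stop when recent interviews add nothing
# new. The two-interview lookback threshold is an arbitrary assumption.
def new_codes_per_interview(coded_interviews: list[set[str]]) -> list[int]:
    seen: set[str] = set()
    added = []
    for codes in coded_interviews:
        added.append(len(codes - seen))
        seen |= codes
    return added

interviews = [
    {"cost", "trust", "time"},
    {"trust", "privacy"},
    {"cost", "privacy"},
    {"trust", "time"},
]
gains = new_codes_per_interview(interviews)   # [3, 1, 0, 0]
if len(gains) >= 2 and sum(gains[-2:]) == 0:
    print("No new codes in the last two interviews - likely approaching saturation")
```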

[https://infodemiology.jmir.org/2021/1/e30971] - - public:weinreich
health_communication, qualitative, research, social_media - 4 | id:744667 -

Objective: In this work, we aimed to develop a practical, structured approach to identify narratives in public online conversations on social media platforms where concerns or confusion exist or where narratives are gaining traction, thus providing actionable data to help the WHO prioritize its response efforts to address the COVID-19 infodemic.

Methods: We developed a taxonomy to filter global public conversations in English and French related to COVID-19 on social media into 5 categories with 35 subcategories. The taxonomy and its implementation were validated for retrieval precision and recall, and they were reviewed and adapted as language about the pandemic in online conversations changed over time. The aggregated data for each subcategory were analyzed on a weekly basis by volume, velocity, and presence of questions to detect signals of information voids with potential for confusion or where mis- or disinformation may thrive. A human analyst reviewed and identified potential information voids and sources of confusion, and quantitative data were used to provide insights on emerging narratives, influencers, and public reactions to COVID-19–related topics.

Results: A COVID-19 public health social listening taxonomy was developed, validated, and applied to filter relevant content for more focused analysis. A weekly analysis of public online conversations since March 23, 2020, enabled quantification of shifting interests in public health–related topics concerning the pandemic, and the analysis demonstrated recurring voids of verified health information. This approach therefore focuses on the detection of infodemic signals to generate actionable insights to rapidly inform decision-making for a more targeted and adaptive response, including risk communication.
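
A rough sketch of the weekly signal computation the abstract describes (volume, velocity, and presence of questions per taxonomy subcategory); the column names, sample data, and flagging thresholds are assumptions, not the WHO pipeline.

```python
# Illustrative sketch (not the WHO pipeline): given posts already filtered into
# taxonomy subcategories, compute the weekly signals the article describes -
# volume, velocity (week-over-week change), and share of posts that are questions.
import pandas as pd

posts = pd.DataFrame({
    "week": ["2020-03-23", "2020-03-23", "2020-03-30", "2020-03-30", "2020-03-30"],
    "subcategory": ["vaccines", "treatment", "vaccines", "vaccines", "treatment"],
    "is_question": [True, False, True, True, False],
})

weekly = (
    posts.groupby(["subcategory", "week"])
         .agg(volume=("is_question", "size"), question_rate=("is_question", "mean"))
         .reset_index()
         .sort_values(["subcategory", "week"])
)
weekly["velocity"] = weekly.groupby("subcategory")["volume"].pct_change()

# Flag subcategories where conversation is accelerating and question-heavy:
# a crude proxy for an information void that an analyst would then review.
flags = weekly[(weekly["velocity"] > 0.5) & (weekly["question_rate"] > 0.5)]
print(flags)
```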

[https://cbail.github.io/textasdata/Text_as_Data.html?fbclid=IwAR1Nl93wTvZlhmVdifK_-I91viDfkH1R69rGwSzE2wM__OOVT_w3mJatgvI] - - public:weinreich
how_to, qualitative, quantitative, research, social_media, twitter - 6 | id:309754 -

This class covers a range of different topics that build on top of each other. For example, in the first tutorial, you will learn how to collect data from Twitter, and in subsequent tutorials you will learn how to analyze those data using automated text analysis techniques. For this reason, you may find it difficult to jump towards one of the most advanced issues before covering the basics. Topics: Introduction: Strengths and Weaknesses of Text as Data; Application Programming Interfaces; Screen-Scraping; Basic Text Analysis; Dictionary-Based Text Analysis; Topic Modeling; Text Networks; Word Embeddings.
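
As a taste of the dictionary-based text analysis tutorial listed above, a minimal sketch; the categories and word lists are toy assumptions.

```python
# A minimal illustration of dictionary-based text analysis: count how often
# words from hand-built category dictionaries appear in each document.
import re
from collections import Counter

dictionaries = {
    "uncertainty": {"maybe", "unsure", "unclear", "perhaps"},
    "trust": {"trust", "reliable", "credible"},
}

documents = [
    "Maybe the new guidance is unclear, I am unsure what to do.",
    "The source seemed credible and I trust the advice.",
]

def score(text: str) -> dict[str, int]:
    tokens = Counter(re.findall(r"[a-z']+", text.lower()))
    return {cat: sum(tokens[w] for w in words) for cat, words in dictionaries.items()}

for doc in documents:
    print(score(doc))
# {'uncertainty': 3, 'trust': 0}
# {'uncertainty': 0, 'trust': 2}
```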
