Our latest capability, “Ask Viamo Anything”, provides the digitally disconnected with access to the latest AI technology – at no cost to them. It was built on, and will soon be offered through, the Viamo Platform. Ask Viamo Anything works on simple mobile phones without internet access. And because it uses voice technology, it can even be used by people with low literacy — leapfrogging text-based approaches and truly democratizing access.
Could this guide us towards a structured approach for assessing the level of community involvement in SBC programmes? At the highest level, “Citizen Control”, communities independently lead programmes with full decision-making authority. “Delegated Power” and “Partnership” designate significant community influence on programme decisions, either through majority control or collaborative governance. In contrast, “Placation”, “Consultation”, and “Informing” indicate lower degrees of participation, where community input may be sought but is not necessarily instrumental in shaping outcomes.
Notifications are extremely beneficial to users, but they often demand users' attention at inappropriate moments. In this paper we present an in-situ study of mobile interruptibility focusing on the effect of cognitive and physical factors on the response time and the disruption perceived from a notification. Through a mixed method of automated smartphone logging and experience sampling we collected 10,372 in-the-wild notifications and 474 questionnaire responses on notification perception from 20 users. We found that the response time and the perceived disruption from a notification can be influenced by its presentation, alert type, sender-recipient relationship as well as the type, completion level and complexity of the task in which the user is engaged. We found that even a notification that contains important or useful content can cause disruption. Finally, we observe the substantial role of the psychological traits of the individuals on the response time and the disruption perceived from a notification.
In their landmark 1959 report, often referenced in leadership theory, social psychologists John R. P. French and Bertram Raven pinpointed five bases of power:
- Legitimate: when people perceive that your rank in a formal hierarchy—e.g., manager, CEO, or president—gives you the right to “prescribe” their behavior
- Reward: when people perceive your ability to distribute rewards for completed tasks or met goals
- Coercive: when people perceive your ability to distribute punishments and disincentives (the opposite of reward power)
- Expert: when people perceive your special knowledge or expertise, which causes them to defer to your judgment
- Referent: when people feel “oneness” with you or a desire to be like you, leading to their respect and admiration of you
Referent power is considered the most potent because it doesn’t require that a leader micromanage, use coercion, or reward to influence others. People follow a leader with referent power based on who the leader is and how they behave. According to French and Raven, referent power has the broadest range of influence of any power, allowing it to be leveraged on a large scale.
Results indicated that emotional shift messages generated more talk than single-valence messages because they elicited greater emotional intensity and deeper message processing.
Out of the 93 behavior change techniques that can be used, on average only 7 were chosen, and the most common were related to:
1. Feedback on behavior
2. Goal setting
3. Action planning
As the study says: “within the ‘Goals and Planning’ BCT group, only 3 out of 9 BCTs were utilized.”
Start with the Quick 2+1™ to find your answer. The next phase is to trust your intuition to Label™ and Mirror™ the circumstances or dynamics that may have led to the confrontation. Then use a little Dynamic Silence™ to allow room for a response from the other side. Once they respond, use mirrors and labels to encourage them to keep talking and gather the information you need to get to the heart of the matter.
In my research, I focus on three things that ran through people’s minds when they were working toward something. These three things are:
- inner thinking: thoughts, pondering, reasoning
- emotional reactions: feelings, moods
- guiding principles: personal rules
Communification - focusing only on marketing communications (promotion) - just 8% of marketing
Do you wonder why people are so inconsistent? Why people often seem to contradict themselves? Why they believe things they know aren't true? Why they say “Don't do X” and then do that very thing? Robert Kurzban explains why. The reason is that the human mind is modular, made up of a large number of parts with different functions. Sometimes these parts conflict with one another.
This manual includes information about Open Policy Making as well as the tools and techniques policy makers can use to create more open and user-led policy.
The new toolkit crosses local, central and international government action. It has many of the elements of the previous framework but also covers new ground. The most obvious change is that we have revised the horizontal axis to better reflect the way government works in practice. This has meant including a number of new areas, namely influencing, engaging, designing, developing, resourcing, delivering and controlling (or managing). The vertical axis still follows the same logic, from ‘softer’, more collaborative power at the top down to more formal government power at the bottom. The update includes many familiar things, from nudging behaviour to convening power, and also adds new areas such as deliberative approaches like citizen juries. This is the framework for Policy Lab's new Government as a System toolkit. When looking across the whole system, it now has 56 distinct actions. Of course this isn’t an exhaustive set of options; you could create more and more detail, as there is always more complexity and nuance to be found in government. Importantly, we want policymakers to consider how multiple levers can be used together to address complex problems.
And remember, 12 questions is the magic number: keep screeners under that length to prevent attrition.
Over the past decade, behavioural scientists have identified five different holistic effects which can all affect the overall effectiveness of a behaviour change intervention. Some of these effects or concepts can be positive, whereas others may end up neutralising the effect of any nudge, or worse, having a negative impact:
- Licensing effects
- Compensating effects
- Positive spillover effects
- Displacement effects
- Systemic effects, or what we are calling ‘nudge fatigue’
Matt Waksman, Ogilvy UK's head of strategy for advertising, illustrates and interprets the role of the strategist within advertising and wider society.
In the first of his series of columns, Ogilvy UK's head of strategy argues that accommodating behaviour, rather than adapting it, might be key to changing it.
Scientific evidence regularly guides policy decisions [1], with behavioural science increasingly part of this process [2]. In April 2020, an influential paper [3] proposed 19 policy recommendations (‘claims’) detailing how evidence from behavioural science could contribute to efforts to reduce impacts and end the COVID-19 pandemic. Here we assess 747 pandemic-related research articles that empirically investigated those claims. We report the scale of evidence for each claim and whether that evidence supports it, to indicate applicability for policymaking. Two independent teams, involving 72 reviewers, found evidence for 18 of 19 claims, with both teams finding evidence supporting 16 (89%) of those 18 claims. The strongest evidence supported claims that anticipated culture, polarization and misinformation would be associated with policy effectiveness. Claims suggesting trusted leaders and positive social norms increased adherence to behavioural interventions also had strong empirical support, as did appealing to social consensus or bipartisan agreement. Targeted language in messaging yielded mixed effects, and there were no effects for highlighting individual benefits or protecting others. No available evidence existed to assess any distinct differences in effects between using the terms ‘physical distancing’ and ‘social distancing’. Analysis of 463 papers containing data showed generally large samples; 418 involved human participants, with a mean sample size of 16,848 (median 1,699). This statistical power underscores the improved suitability of behavioural science research for informing policy decisions. Furthermore, by implementing a standardized approach to evidence selection and synthesis, we highlight broader implications for advancing the use of scientific evidence in policy formulation and prioritization.
New Metaphors is a creative toolkit for generating ideas and reframing problems.
Thinking Styles are the archetypes that you would base characters on, like characters in TV episodes. (Try writing your scenarios like TV episodes, with constant characters.) Characters think, react, and make decisions based on their thinking style archetype. BUT they also switch thinking styles depending on context. For example, if you take a flight as a single traveler versus bringing a young child along, you’ll probably change your thinking style for that flight, including getting to the gate, boarding, and deplaning.
Free Behavior Design, Innovation and Change Tools These frameworks started out as internal tools we used on client projects at Aim For Behavior to help us save time and create better outcomes for the customers and companies we were working with. We are always adding more frameworks and iterating on the current ones based on feedback.
100+ open-source innovation tools from the greatest design & strategy agencies in the world. Ideal for both offline and online workshops. All tools are pixel-perfectly packaged as vectorized PDFs or PNGs and can be downloaded for free.
“We tested the effectiveness of different messages aimed at addressing climate change and created a tool that can be deployed by both lawmakers and practitioners to generate support for climate policy or to encourage action,” says Madalina Vlasceanu, an assistant professor in New York University’s Department of Psychology and the paper’s lead author. The tool, which the researchers describe as a “Climate Intervention Webapp,” takes into account an array of targeted audiences in the studied countries, ranging from nationality and political ideology to age, gender, education, and income level. “To maximize their impact, policymakers and advocates can assess which messaging is most promising for their publics,” adds paper author Kimberly Doell, a senior scientist at the University of Vienna who led the project with Vlasceanu. Article: https://osf.io/preprints/psyarxiv/cr5at Tool: https://climate-interventions.shinyapps.io/climate-interventions/
If you’re trying to think and act more creatively and more critically, focus on asking better, more interesting questions of the briefs you’re tasked with answering. What we teach children can and should be applied to our own professional lives, too. A focus on problems and solutions first promotes consistent, ‘safe’ answers, but won’t move the work on. Spending time on asking and answering better questions will help refine the understanding of a problem and will create the conditions for new, interesting and challenging solutions.
I sometimes make a further suggestion to client teams who have years of experience working directly (via research) with the diversity of the people their organization supports. I suggest they abandon “persona” (a representation of a person) and replace it with “behavioral audience segment” (a representation of a group). (Note: I have begun calling these “thinking styles” to emphasize that a person can shift to a different group based on context or experience.) This change allows those qualified teams to get away from names and photos. I don’t suggest this for everyone. Note: “behavioral audience segment” is the name I use, although there may be a better one. In its defense, Susan Weinschenk uses “behavioral science” to mean what I am trying to represent, and “audience segment” is a common way to express a group an organization is focused on.
Why are your organization’s personas so hard to use? It might be because they are marketing personas, based on the way customers buy what you produce—segments of the market divided up by the way each group tends to make a purchase decision. Maybe what you’re designing for isn’t the purchase process. A problem many organizations run into is relying on only one set of personas. Personas can be derived from any sort of audience segment. There are many ways your organization might have divided the people it supports into segments. There are marketing or buying segments, demographic segments, preference segments, and behavioral segments, to name but a few. Within each of these types of segments, your organization might take different perspectives, such as first-time buyer and return buyer.
But she did explain how researching and designing for the majority or “average user” can end up ignoring, othering, and harming the people our designs are meant to serve. Indi shared how she finds patterns in people’s behaviors, thoughts, and needs—and how she uses that data to create thinking styles that inform more inclusive design decisions. Indi talked about…
- Why researchers should look for patterns, not anecdotes, to understand real user needs.
- What thinking styles are and how to uncover and use them.
- Why your “average” user often doesn’t exist in the real world, and how we can do better.