Tech Lead notes from User Research London 2019

As part of working at Redgate I was lucky enough to go to User Research London 2019. It was especially valuable for me as a Tech Lead on SQL Monitor because, although we do a lot of user research at Redgate, you don’t always get the opportunity to focus on learning more about it in that role. It’s a growing conference, with around 350 attendees this year.

Key experience indicators for product management, Tomer

A lot of people care more about launching the product than landing it. Launching something is not the end; it’s just the beginning – perhaps only ~10% of the work.

Measure specific features of your product, in context. Ask about the product just after someone has used it, and target the person who used it, not the person who set it up or booked the service. Limit yourself to the 3 features people care about most and measure them using ratios (e.g. 7-day active usage as a ratio of total users).

HEART is a way to think about what to measure. It stands for Happiness, Engagement, Adoption, Retention, Task Success.

To measure Happiness, consider ANPS (Actual Net Promoter Score). Instead of the hypothetical NPS question “How likely are you to recommend us?”, rated on a scale, ask whether the user has actually recommended the product in the last week. With NPS you get a score, but no idea how to improve it. For Retention, look at the time to churn.
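As a concrete illustration, here’s a minimal sketch of how a couple of these ratio-based metrics might be computed; the per-user record shape and field names are my own assumptions, not anything from the talk:

```python
from datetime import datetime, timedelta

# Hypothetical per-user records: last time they used the feature, plus the
# ANPS survey answer ("Have you recommended the product in the last week?").
users = [
    {"last_used": datetime(2019, 6, 22), "recommended_last_week": True},
    {"last_used": datetime(2019, 5, 1),  "recommended_last_week": False},
    {"last_used": datetime(2019, 6, 24), "recommended_last_week": True},
]

now = datetime(2019, 6, 25)

# Engagement: 7-day active usage as a ratio of total users.
active_7d = sum(u["last_used"] >= now - timedelta(days=7) for u in users)
engagement = active_7d / len(users)

# Happiness: ANPS as the share of users who actually recommended the product.
anps = sum(u["recommended_last_week"] for u in users) / len(users)

print(f"7-day active ratio: {engagement:.0%}, ANPS: {anps:.0%}")
```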

See Google’s HEART Framework for Measuring UX for more information.

My key takeaways: measuring specific product features is a good idea, helping you know where to focus, and you should measure after releasing the product/features. It’s one of the things we find hardest at Redgate, sometimes feeling like we have to make make make, rather than make, release, reflect, improve.

Don’t go it alone – turning your doubters into defenders, Sabine

What went wrong? You did the research, analysis, and presentation and in the end, the team did nothing based on it.

You should educate the team about what research is, speak up to assert its value, and collaborate with them.

Proactive education can prevent having to justify which tools you use further down the line, as it builds a basic level of understanding. There is only so much validation you can do before releasing; research is more about fixing pain points than being a crystal ball that predicts huge product success.

Speaking up can be difficult. Encourage the team to start with the question, then take ownership of figuring out the best way to answer it given timeframes and other constraints.

To improve collaboration, build it methodically into your process so it actually happens. Give jobs to people on the team and say thank you. Set up recurring 1-1 meetings. Consider a 3-hour timeslot to collaborate on design (and bring food!). At Facebook their slogan is “Stronger Together” – Design with a capital D.

My key takeaways: UX onboarding is important, as you often get the same questions/lack of understanding from new starters. We shouldn’t expect research to be a crystal ball, but we can use it to improve the product before release. It’s also good to respect UX specialists more and let them choose the appropriate research method given the constraints (developers are always tempted to say “do a survey!”).

Research operations, Emma

Research Ops helps people who do research to do their best work. It is a layer that sits above all the management choices around user research: GDPR, tool choice, recruitment, etc.

My key takeaways: Research Ops is work that companies do whether or not it is made an explicit role.

Psychological safety for researchers, Tristan

As researchers for the Government Digital Service, they worked with the Police, which meant they were exposed to horrific images.

The project was voluntary – no one had to do it. Before the project started, they put safety checks in place to make sure exposure was as limited as possible. They also selected only people capable of withstanding the exposure, using psychological assessments to judge this.

During the project, they locked down information to private channels and kept the materials secure (to prevent accidental exposure). They prepared for the psychological risks by avoiding overload from multiple projects and sudden changes in scope/timings, and by using psychologists for support and to check on everyone’s health midway through.

My key takeaways: some projects can cause psychological damage, and we should make sure people are ready for such work, otherwise they could be harmed. Reducing general stress around the project also helps.

The Selfish Giant, Nabeeha

Nabeeha was left Oscar Wilde’s The Selfish Giant in an envelope with the inscription “User Research is a Team Sport”. This upset her, as she wished she had been given the feedback face-to-face.

It also made her realise that her role had gone by multiple names over her decade in the tech industry, from Usability Engineer to User Researcher.

As a user researcher, she felt that all analysis had to be done as a team, even when overwhelm started to set in. She realised that research by committee is much worse than design by committee. Weak research is worse than no research and can be unethical.

An example: heart surgery is a team sport, but not everyone participates in all aspects of the job, e.g. the heart surgeon will lead the operation.

We need to acknowledge roles and responsibilities, set limits, and define guiding principles. The role of the researcher is to converge multiple perspectives to help the whole team see the bigger picture.

My key takeaways: user researchers often work in teams, but that shouldn’t stop the researcher taking charge and leading the research. Taking charge and setting boundaries do not stop user research from being a team sport.

Researching voice UX, Charlotte

Charlotte has worked on voice UX for two BBC Alexa skills, launched via phrases like “Alexa, open CBeebies”.

They use scripts to communicate the logic to the developers; however, this isn’t good for research with end users.

Separate design and tech – your voice becomes your prototyping tool. You take rough-and-ready audio recordings into your research and refine them. Wizard of Oz testing works well here: a “wizard” tricks the user into thinking they are interacting with an intelligent system, but it’s really a person behind the curtain responding in real time with a set of prototype recordings. It’s crucial to test in context.

However, it still required late-stage usability testing to check that the game works with Alexa itself, which reveals different insights from the human-controlled Wizard of Oz prototype. Things often go wrong: kids responded early and sometimes used unexpected variations of words, which could launch different games rather than continuing the current one.

My key takeaways: some projects benefit from multiple stages of UX testing, and this voice UX case study is a strong example. Wizard of Oz testing is a key technique for getting early feedback with less investment.

Creativity in research, Dalia

We often think of creativity as a divine ability to conjure up something where there was nothing before. But creativity manifests itself in many ways over many fields.

There are two types of creative minds – H-creative (historically creative: producing something new to the world) and P-creative (personally creative, e.g. borrowing from another domain). P-creativity is everyday creativity.

The different levels of P-creativity (Sanders and Stappers, 2013) are doing, adapting, making and creating. A cookie-dough example: buying pre-made cookie dough (doing), adding chocolate chips (adapting), making dough from scratch (making), and inventing a new cookie recipe, e.g. cornflake cookies (creating). This is called the everyday creativity framework.

Dalia worked on the checkout experience at Shopify. It was difficult to connect usability testing with purchase intent, so she adapted an existing technique from another discipline to demonstrate that stress during the checkout experience negatively influences purchase intent.

Discount stacking required more creativity: she made a number of games/scenarios to uncover the stacking rules from the people requesting the feature. No one could define the rules, but they could say how much things should cost given multiple discounts and, with enough examples, this exposed the rules.
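As an illustration of how example prices can expose a rule nobody can state (the actual Shopify scenarios aren’t public, so the numbers and candidate rules below are hypothetical), a sketch like this could test candidate stacking rules against the prices people said felt right:

```python
# Hypothetical examples: (original price, [discount fractions], expected price).
examples = [
    (100.0, [0.10, 0.20], 72.0),
    (50.0,  [0.50, 0.10], 22.5),
]

def stack_sequential(price, discounts):
    # Apply each discount to the already-discounted price.
    for d in discounts:
        price *= 1 - d
    return price

def stack_additive(price, discounts):
    # Add the discounts together, then apply once.
    return price * (1 - sum(discounts))

for rule in (stack_sequential, stack_additive):
    matches = all(abs(rule(p, ds) - expected) < 0.01 for p, ds, expected in examples)
    print(f"{rule.__name__}: {'matches' if matches else 'ruled out by'} the examples")
```

With enough examples, only one candidate rule survives – which is essentially what the games achieved with the stakeholders.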

My key takeaways: be P-creative, try to use or adapt techniques from different disciplines to tackle unusual problems.

Considering research, Dave Hora

Ask your team “How will learning this change what we do?” What impact will it have? If you do all the work and nothing changes then you shouldn’t have done it.

Dave showed us a framework for experiences. Action is the core that builds a user experience; around it sits Intent (e.g. Jobs To Be Done), and then Culture & Context. Around all of this are organisational assumptions about who we are and what we do, plus the expectations/mental model of the user (a layer of emotion).

User research connects organisational assumptions to the needs of the people we serve. Without it, you can get echo chambers, which cause organisations to fail. Conway’s Law can also come into play and cause sub-par user experiences, for example when two teams work on connected apps but do not handle failure cases across them (e.g. missing ingredients between grocery supply and demand apps).

Things change more rapidly the closer you are to the touchpoint of the experience, and more slowly as you move from the experience to goals/motivations and out to the real world – and likewise from the touchpoint to the information environment and the organisation. Change takes time as we build up insights from experience.

Acting on user research requires people to listen and change what they do based on new information. This can take time.

My key takeaways: organisational assumptions can cause echo chambers if you don’t listen to user research.
