How I approached discovering search pain points, exploring them, defining potential solutions, and validating those solutions through rapid prototyping, user testing and early, frequent stakeholder involvement. The approach we’ve taken can be adapted, and I’m sure it could be improved.
- Background reading
- Data analysis
- Stakeholder workshops
- Rapid prototyping
- User testing
Founded in the 30s, our client offers a product range spanning thousands of poorly sequenced categories, supported by creaky legacy systems.
I collaborated with data scientists, designers and product owners to validate and address the top issues facing mainstream and details-driven customers. These are people who search by attributes (9v batteries), part numbers and broad queries (e.g. switches).
Curating poorly sequenced data.
Searching on a website is like a conversation, where the quality of the dialogue can vary wildly from site to site, depending on how the data and interface work together.
A poor experience can feel like a hallway of closed doors. On the other hand, in a good experience every click feels like a valuable opening.
Each search ‘mess’ has its own unique characteristics. For example:
- a product range which spans thousands of categories and even more subcategories
- products with several similar attributes
- product groups with different units of measurement
- unfamiliar attribute labels
- divergent, unique customer needs from audiences who include buyers, engineers, makers and designers. These are people who work on imagination-defying projects – from the ambient lighting for a coral farm, to making sure saws in a factory are in working order.
Defining potential pain points and principles to design by
Before the project kicked off, we read widely to start understanding search behaviours and potential pain points.
Getting an overview of these behaviours gave us clues about which pain points customers might be experiencing.
This also helped to outline key principles to guide test prototypes.
- Make it easy to assess the quality of options
- Make options to go forwards obvious
- Remove ambiguity around what the available options are
Defining search journeys to explore
On most websites, a relatively small number of popular short-tail queries accounts for a large share of overall search volume. This means that optimising these journeys has an outsized impact on conversions.
We needed a data-driven starting point to focus our efforts. Discussions with search tuners led us to segment audience journeys by query type. Our data scientist collated top queries relating to:
- attribute or feature-focussed queries, like 9v batteries
- generic or topic queries, like switches
- part number queries
- mistyped part number queries
- brand-related queries
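To make the segmentation concrete, here is a minimal sketch of how queries could be bucketed by type. The patterns and the part number format are illustrative assumptions, not the client’s real classification rules:

```python
import re

def classify_query(query: str) -> str:
    """Bucket a raw search query into one of the journey types above.
    The patterns below are illustrative assumptions only."""
    q = query.strip().lower()
    # Assume part numbers look like unbroken letter/digit codes, e.g. "sn74ls00n"
    if re.fullmatch(r"[a-z0-9][a-z0-9\-]{4,}", q) and any(c.isdigit() for c in q):
        return "part number"
    # Attribute queries pair a measurement with a product word, e.g. "9v batteries"
    if re.search(r"\b\d+(\.\d+)?\s*(v|w|a|mm|ohm)\b", q):
        return "attribute"
    # Everything else is treated as a generic or topic query
    return "generic"

for q in ["9v batteries", "switches", "sn74ls00n"]:
    print(q, "->", classify_query(q))
```

In practice a rule set like this would be tuned against the real query logs, with brand and mistyped-part-number buckets layered on top.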
We ran a workshop where product owners presented observations from a walkthrough of each of these query types.
We asked them to group the issues they noticed as critical, annoyances and nice-to-haves. These issues framed the hypotheses, research questions, tasks and design challenges for the prototypes we then explored in user testing.
Framing our design and research goals for the first design cycle
Generic search represented a relatively high percentage of all search journeys in our sample. This journey also potentially contained the most frequently occurring pain points. Here are a few of our hypotheses about what those were.
Top issues facing generic query searchers
|Hypothesis|Big question to answer, to reduce the risk of building the wrong thing|
|---|---|
|Predictive search options are likely to be unhelpful.|How helpful are predictive search options for generic searchers?|
|It may not be easy to tell the difference between categories, or to compare and select them.|How easy or hard is it to differentiate categories?|
|Presenting thousands of products without tools to narrow them is obviously frustrating.|How might product lists on search results be helpful? How might they be more effectively presented?|
With data on our top generic queries, we started recruiting for user tests. This also became an opportunity to build insights on a product category which all participants had searched for – LEDs.
Prototyping potential solutions to test
In the meantime, we began framing ‘how might we’ style design challenges to shape the test prototypes, using the hypotheses outlined in the walkthroughs. In summary, these were:
How might we:
- present relevant predictive search options to generic and attribute searchers
- make it easier to see how queries are reflected in resulting pages
- differentiate categories and make them easier to compare and select
- present tools to refine potentially thousands of products
- make options to narrow results obvious at a glance
- label options to narrow in a more familiar way
Exploring third party patterns with caution
Competitor sites were a source of inspiration for how we might address the search issues we’d surfaced. However, you can’t take a one-size-fits-all approach when gathering ideas from third-party sites, which have different product ranges, technical challenges, and potentially another set of audience expectations to meet.
Outlining core test task flows creates a road map to focus prototyping
I worked closely with another designer at this point to generate concepts and critique them on a weekly basis.
Involving stakeholders in observing, as I ran customer research and surfaced insights
Specialising in non-leading moderation and interviewing, I gathered feedback on 20 prototypes, user tested over 8 design cycles with 70 customers.
Between user tests, we hosted 30 minute debriefs with observers to reflect on what they’d noticed and the inferences they’d made. This was critical for reaching a meaningful consensus on what had happened, and separating isolated actions from patterns of behaviour.
Key points were outlined on a whiteboard featuring printouts of the tested pages, which helped us understand what was working, or not, at a glance.
We hosted debriefs remotely using GoToMeeting. In a smaller team, this may not be needed. In larger organisations, it can nurture psychological buy-in.
During the tests, a number of refinements inevitably came to mind. I sketched ideas as they surfaced in tests and used these to visualise concept refinements for the team.
Shaping future search design cycles
A more official, but still informal, debrief was hosted after each round of design and research. Here stakeholders noted opportunities as we presented insights. We dot-voted on these anonymously before deciding on the next big questions to address.
As a new team, the initial conversations that happened at this point were necessarily uncomfortable. In time, we learnt to identify who our final decision maker was and worked with them to understand the biggest risks they wanted to mitigate in future design-research cycles. Mark Margolis offers some practical tips on this part of the process on the Google Ventures blog.
Deliverables for each cycle were also shared in a monthly newsletter, on Slack, and at brown-bag events where anyone could hear how the project was progressing.
Outline core test task flows, before sketching concepts, to manage time and expectations
Between the 1st and 2nd sprints, we learnt that outlining core test task flows before sketching concepts helped us to manage time and expectations, and to get a holistic overview of what we were working towards.
Gather realistic data to include in concepts, to gather richer feedback
We also learnt that, as data is such a core part of the conversation between an interface and its audience during a search, it has to be as realistic as possible. You can’t expect to rebuild an entire search experience, only a core part of a task. Outline that and fill in the details.
To do this, we sense-checked content groupings and ordering with three product specialists and correlated their ideas. We asked them how they would order category options for 24v power supplies by relevance (to see if this would help product selection).
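One way to correlate specialists’ orderings is a rank correlation such as Kendall’s tau, where 1.0 means two orderings agree exactly and -1.0 means they are reversed. A pure-Python sketch, with invented category options for 24v power supplies:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two orderings of the same items.
    Returns 1.0 for identical order, -1.0 for reversed order. No tie handling."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        # Pair is concordant if both rankers order x and y the same way
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1
        else:
            discordant += 1
    n = len(rank_a)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical specialist orderings of category options for 24v power supplies
specialist_1 = ["din rail", "enclosed", "open frame", "external"]
specialist_2 = ["din rail", "open frame", "enclosed", "external"]
print(kendall_tau(specialist_1, specialist_2))
```

A high correlation across specialists suggests the ordering is stable enough to test with customers; a low one signals the grouping itself needs more work first.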
More frequent, informal reviews = better quality prototypes
In the first sprint we ran 4–5 weekly design and technical reviews, which was too slow to support the project’s momentum. The pace accelerated to 4 reviews over 10 working days in the 2nd sprint and, finally, daily reviews in sprints 4 and 5. This made it much easier to get a feel for the project’s direction. It also ensured that designers had more opportunities to fold feedback into prototypes, which maximised the quality of the insights we surfaced in testing.
Making sense is political, collaborative and sometimes piecemeal, but worth it.
If search is a conversation, a big challenge is: how do we bridge the gap between what’s entered and how the system understands it?
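For mistyped part number queries, one small piece of that bridge is fuzzy matching the query against the catalogue. A minimal sketch using edit distance, with invented part numbers; a real system would use an indexed approach rather than a linear scan:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def closest_part(query: str, catalogue: list[str], max_dist: int = 2):
    """Suggest the catalogue part number nearest to a mistyped query,
    or None if nothing is within max_dist edits. Part numbers are invented."""
    best = min(catalogue, key=lambda p: edit_distance(query.lower(), p.lower()))
    return best if edit_distance(query.lower(), best.lower()) <= max_dist else None

# "sn74lso0n" has a letter o where a zero belongs
print(closest_part("sn74lso0n", ["SN74LS00N", "LM317T", "NE555P"]))
```

The same idea underpins “did you mean” suggestions: accept a query one or two edits away from a known part number, and show it rather than a dead end.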
How are your team handling this user journey?
Are there any ways these approaches could be adapted and improved for your audiences?
Books, presentations and sources which have helped us
- Search Patterns, by Peter Morville and Jeffery Callender, O’Reilly
- How to Make Sense of Any Mess, by Abby Covert
- Design the Search Experience, by Tony Russell-Rose and Tyler Tate
- Worksheets to support collaborative sense making, by Abby Covert
- The Secret Lives of Links, Jared Spool, UIE Fundamentals
- Search as a Multi Channel Experience, Pete Bell, UIE All You Can Learn
- Collaborative Information Architecture, Abby Covert
- Ecommerce Search Usability, Baymard report
- Ecommerce Search User Experience, Nielsen Norman Group report
- Start at the End, How to Do Research That Has Real Impact, Mark Margolis