Case Study 04 UX Research / Conversion Optimization

Roots Canada — diagnosing a sales drop.

A research plan and report tackling a real business problem: a 17% decline in online sales over three months, despite stable web traffic. The job was to find out where the funnel was leaking and recommend targeted fixes.

Client
Roots Canada
(case study)
Role
UX Research
Strategy & Planning
Team
Shrey Patel
Daniel Hartmann
Year
December 2023
01 / Executive Summary

Stable traffic. Falling sales. Something between them was broken.

The problem

Roots Canada was facing a 17% decline in online sales over three months. Web traffic was stable. The order flow had recently been redesigned and received positive qualitative feedback. On paper, nothing looked wrong.

That gap between traffic and conversion is where the real story lives. This case study lays out a research plan to diagnose the cause, validate it with data, and recommend targeted fixes that could be measured against actual sales numbers.

17%

Decline in online sales over three months, despite stable site traffic and a recently redesigned order flow that tested well qualitatively. The mismatch between flow approval and sales outcomes was the entry point for the research.

02 / Approach

Find the actual friction. Don't assume.

Research goal

The primary goal was to uncover the reasons behind the drop in completed online purchases. We were careful not to assume the redesign was the problem just because it was the most recent change. Sales declines are usually multi-causal, and we wanted to look at the experience holistically rather than chasing a single culprit.

To do that, we proposed a four-method research plan that would give us behavioural data, attitudinal data, and validation in one cycle.

03 / Methodology

Four methods, layered on purpose.

Why four

Each method covered a blind spot of the others. Usability testing showed us behaviour. Surveys gave us reasoning. A/B testing validated proposed changes with measurable data. Analytics tied everything to actual user flow at scale.

Method 01

Usability Testing

Purpose

Identify issues with the current website layout from the user's perspective. Watch where they hesitate, click wrong, or give up.

Why it matters

Surveys tell you what users say. Usability testing shows you what they actually do, which is often different.

Method 02

Online Surveys

Purpose

Collect data on user experiences, layout opinions, and the specific issues that prevented purchase completion.

Why it matters

Reaches users at scale and surfaces patterns that one-on-one testing can't quantify.

Method 03

A/B Testing

Purpose

Compare new design hypotheses against the existing layout to evaluate measurable impact on sales and engagement.

Why it matters

Moves recommendations from "we think this would help" to "the data shows it did."
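
As a rough sketch of how each variant comparison could be scored (the case study doesn't specify Roots' testing tooling, and the counts below are invented), a standard two-proportion z-test on conversion counts is enough to separate real lift from noise:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int) -> tuple[float, float]:
    """Compare conversion rates of control (A) and variant (B).

    Returns the z statistic and two-sided p-value. Hypothetical helper,
    not taken from any actual Roots tooling.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented example: 480/12,000 control vs 552/12,000 variant conversions
z, p = two_proportion_z_test(480, 12_000, 552, 12_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # ship only if the lift is significant
```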

Method 04

Google Analytics

Purpose

Verify survey and usability findings against real user behaviour at scale. Track engagement and identify which pages need the most help.

Why it matters

Grounds qualitative research in quantitative reality. Helps prioritise which findings to act on first.
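
A minimal illustration of that prioritisation, with invented step names and counts standing in for a real Google Analytics export: computing per-step drop-off is what pinpoints which page needs the most help.

```python
# Hypothetical funnel sketch; the real steps and counts would come
# from Google Analytics, not from this case study.
funnel = [
    ("Product page", 100_000),
    ("Add to cart",   22_000),
    ("Checkout",       9_000),
    ("Purchase",       6_300),
]

# Walk adjacent funnel steps and report where users leak out
for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    drop = 1 - next_users / users
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
```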

04 / Key Findings

Three problems, each fixable.

What we found

The research surfaced three distinct issues, each contributing to the drop in different ways. None of them was the redesigned order flow itself — which is exactly why a one-fix assumption would have been the wrong move.

Finding 01

Information architecture fights the user.

Users found it difficult to navigate between categories and sub-categories. The structure had grown over time and no longer matched how customers thought about the products.

Recommended solution

Simplify the IA by merging overlapping categories and sub-categories for more intuitive browsing.

Finding 02

Product pages under-style the product.

A lack of styling information made decision-making harder. Customers couldn't picture how the item fit into an outfit, which is critical for a clothing brand built on lifestyle storytelling.

Recommended solution

Add a "Styled by the model" section and styling suggestions on every product page.

Finding 03

Manual category selection wastes attention.

Users had to manually filter by gender and category on every visit. For repeat customers, even logged-in ones who returned regularly, this felt like the site had forgotten them.

Recommended solution

Implement automatic product categorisation by gender preference to save time on every return visit.
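
A minimal sketch of what that defaulting logic could look like, assuming access to a customer's order history and a saved preference; the function and field names are hypothetical and don't come from Roots' actual stack:

```python
from collections import Counter

def default_category(order_history: list[str],
                     saved_pref: str | None) -> str:
    """Pick a returning customer's default gender category.

    Prefers observed behaviour (past orders), falls back to an
    explicitly saved preference, then to a neutral default.
    """
    if order_history:
        # Most frequently purchased category wins
        return Counter(order_history).most_common(1)[0][0]
    if saved_pref:
        return saved_pref
    return "all"

print(default_category(["women", "women", "men"], None))  # -> "women"
print(default_category([], "men"))                        # -> "men"
```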

05 / Implementation Plan

Eight weeks, four phases.

The plan

To run the full research cycle and present findings to stakeholders, we proposed an 8-week implementation plan with clear phases and deliverables at each step.

Weeks 1 — 2
Preparation
  • Create screener and moderator script for usability testing
  • Develop and submit online survey questions for approval
  • Recruit participants for usability testing and surveys
Weeks 3 — 4
Data Collection
  • Conduct usability testing sessions and online surveys
  • Analyse data and validate findings against Google Analytics
Weeks 5 — 7
Testing & Analysis
  • Develop A/B tests based on the strongest insights
  • Run A/B tests and analyse results against baseline
Week 8
Presentation
  • Prepare and present findings and recommendations to stakeholders

06 / Success Metrics

How we'd know it worked.

Defining success

Research without measurable outcomes is just opinion with footnotes. We defined three concrete metrics tied to business goals, each one trackable through Google Analytics so the team could see results in real time.

Metric 01
10% lift
Sales increase target

A minimum 10% sales lift with each new feature or layout change tested. Below that, the change isn't worth shipping.
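
With invented conversion rates for illustration, the ship/hold decision reduces to simple arithmetic against that threshold:

```python
# Illustrative numbers only; real rates would come from the A/B test.
baseline_rate = 0.040   # hypothetical pre-change conversion rate
variant_rate = 0.045    # hypothetical post-change conversion rate

lift = (variant_rate - baseline_rate) / baseline_rate
print(f"lift = {lift:.1%}")               # 12.5%
print("ship" if lift >= 0.10 else "hold")
```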

Metric 02
engagement
Pages per visit

Track pages per visit and additional product page views. Higher engagement signals the IA changes are working.

Metric 03
bounce
Session duration

Improved session duration and reduced bounce rates indicate the redesigned experience is keeping users on-task.

07 / Limitations

What this research can't tell you.

Being honest

Every research plan has constraints worth naming upfront. Naming them protects the work and keeps stakeholders calibrated on what to expect.

Challenges

  • Heuristic evaluation can miss issues that only show up under realistic use, which is why usability testing and surveys were essential.
  • Recruiting participants who match the actual customer base requires care. A bad sample produces confident wrong answers.

Limitations

  • A/B testing surfaces the impact of small, measurable changes. Bigger structural problems (like full IA rebuilds) need redesign cycles, not split tests.
  • Findings assume the data collected is precise and that user feedback reflects genuine intent rather than performance for the moderator.

08 / Conclusion

Diagnose the right thing. Then prove it.

What this work was about

The 17% sales drop wasn't going to be solved by redesigning the order flow again. The research plan was structured to surface the real causes (IA, product information, and category selection friction) and to validate fixes with measurable lift before recommending them at scale.

The core idea: most product problems aren't visual problems, even when they look that way. They're misunderstandings about how customers actually use the site, hidden behind metrics that look fine on the surface.

"Stable traffic with falling sales is a research problem, not a redesign problem."

What I learned

Diagnose before you prescribe. The temptation when sales drop is to redesign something obvious. The harder work is figuring out which thing to redesign, and that requires patience.

Multi-method beats single-method. No one research approach gives you the whole picture. Behaviour, attitude, and analytics each tell different parts of the story.

Tie findings to dollars. Recommendations land harder when they're paired with target metrics. "Improve the IA" is weak. "Improve the IA to lift conversion 10%" is a project anyone can rally behind.
