A research plan and report tackling a real business problem: a 17% decline in online sales over three months, despite stable web traffic. The job was to find out where the funnel was leaking and recommend targeted fixes.
Roots Canada was facing a 17% decline in online sales over three months. Web traffic was stable. The order flow had recently been redesigned and received positive qualitative feedback. On paper, nothing looked wrong.
That gap between traffic and conversion is where the real story lives. This case study lays out a research plan to diagnose the cause, validate it with data, and recommend targeted fixes that could be measured against actual sales numbers.
A decline in online sales over three months, despite stable site traffic and a recently redesigned order flow that tested well qualitatively. The mismatch between flow approval and sales outcomes was the entry point for the research.
The primary goal was to uncover the reasons behind the drop in completed online purchases. We were careful not to assume the redesign was the problem just because it was the most recent change. Sales declines are usually multi-causal, and we wanted to look at the experience holistically rather than chasing a single culprit.
To do that, we proposed a four-method research plan that would give us behavioural data, attitudinal data, and validation in one cycle.
Each method covered a blind spot of the others. Usability testing showed us behaviour. Surveys gave us reasoning. A/B testing validated proposed changes with measurable data. Analytics tied everything to actual user flow at scale.
Identify issues with the current website layout from the user's perspective. Watch where they hesitate, click wrong, or give up.
Surveys tell you what users say. Usability testing shows you what they actually do, which is often different.
Collect data on user experiences, layout opinions, and the specific issues that prevented purchase completion.
Reaches users at scale and surfaces patterns that one-on-one testing can't quantify.
Compare new design hypotheses against the existing layout to evaluate measurable impact on sales and engagement.
Moves recommendations from "we think this would help" to "the data shows it did."
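One way to make that "the data shows it did" claim rigorous is a two-proportion z-test on completed purchases per variant. A minimal sketch, using only the standard library; the function name and the sample counts are illustrative assumptions, not figures from the actual study:

```python
import math

def ab_lift_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert better than control A?

    conv_*: completed purchases; n_*: sessions exposed to each variant.
    Returns (relative_lift, p_value) for a two-sided test.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal CDF
    return (p_b - p_a) / p_a, p_value

# Illustrative numbers: 4.0% vs 4.7% conversion over 10k sessions each.
lift, p = ab_lift_significance(conv_a=400, n_a=10_000, conv_b=470, n_b=10_000)
print(f"relative lift: {lift:.1%}, p-value: {p:.4f}")
```

A change only counts as validated when the lift is positive and the p-value clears whatever significance threshold the team agrees on up front.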
Verify survey and usability findings against real user behaviour at scale. Track engagement and identify which pages need the most help.
Grounds qualitative research in quantitative reality. Helps prioritise which findings to act on first.
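"Identify which pages need the most help" is, in practice, a per-step funnel drop-off calculation. A sketch of the idea, assuming hypothetical page names and session counts (not Roots' actual analytics data):

```python
# Hypothetical session counts at each funnel step; names and numbers are
# illustrative, not taken from the study.
funnel = [
    ("home", 100_000),
    ("category", 62_000),
    ("product", 38_000),
    ("cart", 9_500),
    ("checkout", 7_600),
    ("order_confirmed", 5_100),
]

def step_dropoff(funnel):
    """Per-step survival rate: share of sessions that continue at each transition.

    The transition with the lowest rate is where the funnel leaks most,
    so the result is sorted worst-first for prioritisation.
    """
    rates = []
    for (prev, n_prev), (step, n_step) in zip(funnel, funnel[1:]):
        rates.append((f"{prev} -> {step}", n_step / n_prev))
    return sorted(rates, key=lambda r: r[1])

for step, rate in step_dropoff(funnel):
    print(f"{step}: {rate:.0%} of sessions continue")
```

Ranking transitions this way turns "which findings to act on first" into a sorted list rather than a debate.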
The research surfaced three distinct issues, each contributing to the drop in different ways. None of them was the redesigned order flow itself — which is exactly why a one-fix assumption would have been the wrong move.
Users found it difficult to navigate between categories and sub-categories. The structure had grown over time and no longer matched how customers thought about the products.
Simplify the information architecture (IA) by merging overlapping categories and sub-categories for a more intuitive browse.
A lack of styling information made decision-making harder. Customers couldn't picture how the item fit into an outfit, which is critical for a clothing brand built on lifestyle storytelling.
Add a "Styled by the model" section and styling suggestions on every product page.
Users had to manually filter by gender and category every visit. For repeat customers, this felt like the site had forgotten them, despite stable traffic from logged-in users.
Implement automatic product categorisation by gender preference to save time on every return visit.
To run the full research cycle and present findings to stakeholders, we proposed an 8-week implementation plan with clear phases and deliverables at each step.
Research without measurable outcomes is just opinion with footnotes. We defined three concrete metrics tied to business goals, each one trackable through Google Analytics so the team could see results in real time.
A minimum 10% sales lift with each new feature or layout change tested. Below that, the change isn't worth shipping.
Track pages per visit and additional product page views. Higher engagement signals the IA changes are working.
Improved session duration and reduced bounce rates indicate the redesigned experience is keeping users on-task.
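The 10% lift threshold above is a simple ship gate once the conversion rates are in hand. A minimal sketch; the function name and the example rates are assumptions for illustration:

```python
def meets_ship_gate(baseline_rate, variant_rate, min_lift=0.10):
    """Ship gate from the success metrics: require at least a 10% relative lift.

    Rates are conversion rates (completed purchases / sessions). min_lift is
    a relative-improvement threshold, not percentage points: going from 4.0%
    to 4.5% conversion is a 12.5% relative lift.
    """
    lift = (variant_rate - baseline_rate) / baseline_rate
    return lift >= min_lift, lift

ok, lift = meets_ship_gate(0.040, 0.045)
print(f"ship: {ok}, relative lift: {lift:.1%}")
```

Anything below the gate gets documented and dropped rather than shipped, which keeps the recommendation list honest.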
Every research plan has constraints worth naming upfront. Naming them protects the work and keeps stakeholders calibrated on what to expect.
The 17% sales drop wasn't going to be solved by redesigning the order flow again. The research plan was structured to surface the real causes (IA, product information, and category selection friction) and to validate fixes with measurable lift before recommending them at scale.
The core idea: most product problems aren't visual problems, even when they look that way. They're misunderstandings about how customers actually use the site, hidden behind metrics that look fine on the surface.
Stable traffic with falling sales is a research problem, not a redesign problem.
Diagnose before you prescribe. The temptation when sales drop is to redesign something obvious. The harder work is figuring out which thing to redesign, and that requires patience.
Multi-method beats single-method. No one research approach gives you the whole picture. Behaviour, attitude, and analytics each tell different parts of the story.
Tie findings to dollars. Recommendations land harder when they're paired with target metrics. "Improve the IA" is weak. "Improve the IA to lift conversion 10%" is a project anyone can rally behind.