Home Depot · UX Design · 2022–2024

From dark patterns to a metric that actually meant something

My Role
UX Designer (IC)
Team
Cross-functional · 12 people
Outcome
CSAT 45 → 65 · 600 responses/week
The short version

We were optimizing for the wrong thing — and it showed

The Home Depot Preference Center is where customers manage their communication settings: which emails they receive, how often, about what. It's a product built, in theory, to serve the customer's relationship with the brand. In practice, the team had inherited a single KPI: opt-out rate.

On the surface, this sounds reasonable. Fewer opt-outs means customers want to stay engaged, right? But when I analyzed where our traffic was actually coming from, a different picture emerged. The overwhelming majority of users arrived via opt-out links embedded in emails — they came specifically to leave. The metric wasn't measuring product quality. It was measuring how successfully we could discourage people from doing what they came to do.

The real problem

A metric that defines success as "fewer people escaping" isn't a product metric. It's a containment metric. It was creating pressure to introduce friction and dark patterns rather than to build something people actually valued.

The team wasn't doing this maliciously — they were responding rationally to the incentives they'd been given. But it meant we had no honest signal about whether the product was good, or what we should build next. We were order-takers waiting for direction that would never come, because no one had the data to give it.

I didn't wait for permission to fix the measurement problem

I was the UX designer on this product. That meant my nominal job was to design screens. But I could see that no amount of screen design would matter if we didn't know what problem we were solving or whether we were solving it.

I took it on myself to make the case for a better metric — first internally with my UX manager, then with engineering leads and ICs, and then with the product team up to director level. The argument I made was simple: opt-out rate is not a product health metric. It measures the wrong population doing the wrong action. I backed this up with traffic source data showing how users were actually arriving at the page.

Once I had alignment, I designed, proposed, and personally configured the new feedback system. I also took on the facilitation work — running biweekly sessions to make sure the team actually used what we were measuring.

Building the loop, then making it mean something

01
Diagnose the metric failure

Used site analytics to map where traffic was coming from. Showed that the majority of visits originated from opt-out links — making opt-out rate a measure of how many people succeeded in leaving, not a measure of product value.
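To make that diagnosis concrete, here is a minimal sketch of the traffic-source breakdown, assuming a flat export of Preference Center sessions with an entry-source column. File and column names are hypothetical; the real analysis ran inside our site analytics tooling, not a script.

    import pandas as pd

    # Hypothetical export of Preference Center sessions; "entry_source" stands
    # in for whatever referrer/UTM field the analytics tool exposes.
    sessions = pd.read_csv("preference_center_sessions.csv")

    # Percentage of sessions by entry source, largest first.
    source_share = (
        sessions["entry_source"]
        .value_counts(normalize=True)
        .mul(100)
        .round(1)
    )
    print(source_share)
    # If "email_opt_out_link" dominates, opt-out rate mostly measures whether
    # people who came to leave managed to leave.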

02
Build the stakeholder case

Convinced my UX manager, engineering managers and ICs, and the product team (manager, senior manager, director) that the metric needed to change. Presented data. Proposed an alternative.

03
Design and ship the thumbs up/down feedback widget

Designed a simple thumbs up/down button placed at the bottom of the Preference Center page, with an optional open-text field that appeared after a response. Took 4 months from proposal to live. Within weeks we had ~600 responses per week, with roughly half including qualitative feedback.
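The CSAT number on the dashboard rolled up from this binary signal. The exact formula lived inside Qualtrics; the sketch below shows a common convention, the positive share of all responses, and is illustrative rather than the system's actual calculation.

    def csat(thumbs_up: int, thumbs_down: int) -> float:
        """CSAT as the percentage of thumbs-up responses. An illustrative
        convention, not necessarily the exact formula Qualtrics applied."""
        total = thumbs_up + thumbs_down
        return round(100 * thumbs_up / total, 2) if total else 0.0

    # e.g. a steady-state week: 600 responses, 390 of them positive
    print(csat(390, 210))  # 65.0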

04
Configure the Qualtrics intelligence layer

Set up a Qualtrics dashboard myself that ingested the open-text responses, analyzed language patterns, and automatically grouped feedback into thematic categories. This gave the team a living view of what users were saying — not just a raw count.

The Qualtrics dashboard I configured — showing live CSAT (65.97), the upward trend over time, sentiment-categorized topic bubbles, and verbatim customer feedback grouped by theme. This ran in the background of every biweekly prioritization session.
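Qualtrics' own text-analytics engine handled the real categorization. As a toy illustration of the idea, a keyword-rule version of the thematic grouping might look like the sketch below; theme names and keywords are invented, not the dashboard's actual categories.

    THEMES = {
        "email frequency": ["too many", "frequency", "every day"],
        "unsubscribe flow": ["unsubscribe", "opt out", "opt-out"],
        "relevance": ["relevant", "for you", "recommend"],
    }

    def tag_themes(comment: str) -> list[str]:
        """Bucket an open-text response into zero or more themes."""
        text = comment.lower()
        hits = [theme for theme, keywords in THEMES.items()
                if any(kw in text for kw in keywords)]
        return hits or ["uncategorized"]

    print(tag_themes("I get way too many emails about tools I never buy"))
    # ['email frequency']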
05
Lead biweekly prioritization sessions

Ran recurring working sessions for the cross-functional team to review the dashboard, debate what the feedback was telling us, and align on what to build next. This was the moment the team stopped being order-takers. We had our own signal. We could make our own decisions.

600 responses per week at steady state
~50% of responses included qualitative feedback
45 → 65 CSAT score during my tenure

The hard parts

The biggest tension wasn't technical — it was organizational. Changing a metric means someone has to admit the old one was wrong. That's uncomfortable for teams who've been measured by it.

Before → After
Before: Success = fewer opt-outs. Incentivizes hiding the preference center.
After: Success = positive feedback rate. Incentivizes building a better product.
Before: No mechanism for users to say what's wrong.
After: 600 data points per week, half with specific language to analyze.
Before: Team waits for direction from stakeholders who don't have data either.
After: Team owns a backlog driven by real user feedback, reviewed biweekly.

A second challenge: I recognized early that thumbs up/down had a ceiling. It tells you how people feel, not whether they trust you. And trust was the word our business leadership kept using. Rather than wait for someone to hand me a definition, I took the ambiguity as a design problem.

Turning a leadership buzzword into something measurable

"Trust" was a word our leadership kept using as a north star for the business. Most teams nod at these words and move on. I saw it as an open question worth answering: what does trust actually mean for a communications product, and can we measure it?

I didn't answer it alone. I designed and led two workshops with our full cross-functional team of 12 — engineers, UX, and product — to collectively define what trust means in the context of the Preference Center experience.

What we defined together

👁 Transparency
🔄 Consistency
🎯 Reliability
💬 Ease of Understanding
Personalization

For each dimension, the team aligned on what it would look like in the product, and what a measurable signal for it could be. The trust metric is still in development — but the framework exists, the team is aligned on it, and it gives the product a direction that no other communications product at the company has.

Why this matters strategically

Anchoring the work to a leadership priority ("trust") wasn't luck — it was a deliberate move. Getting things built inside a large organization requires connecting your work to what leadership already cares about. I recognized the word being used and gave it a definition our team could actually act on.

Designing a future state — then letting customers tell me what was still wrong

The CSAT score was rising and the trust metric framework gave us a north star. But I wanted to validate the direction more concretely — so I designed a future state Preference Center and tested it. The mockups reflected both existing customer pain points from our feedback data and patterns from a competitive analysis of Lowe's, Walmart, Amazon, Ace Hardware, and Target.

The research program I designed and ran:

01
15 unmoderated usability tests across 5 tabs

Screened for frequent Home Depot shoppers in home improvement or contracting. Each respondent tested one tab of the new experience — About Me, Marketing Preferences, Order Preferences, Account Preferences, and Privacy — and was asked what they expected to find and how the experience felt.

02
2 semi-structured interviews with Managed Pros

A distinct persona with different needs than DIY customers. Asked for overall feedback on each tab and the new features added — specifically to understand whether the Pro experience required a different approach.

03
Tree test (n=15) to validate information architecture

Asked 15 customers to navigate to specific preferences using the proposed structure. The goal: confirm whether the new tab architecture matched customers' mental models.

The five tabs tested

Each tab of the future state design was tested independently. These are the mockups customers responded to.

About Me — trade, company size, language, accessibility settings
Marketing Preferences — personalized "For You" tags, frequency controls, digest creation
Order Preferences — delivery communication toggles and delivery preference defaults
Account Preferences — user management, Pro account rep communications
Privacy — interest-based ad controls and privacy rights access
Tree test result

26% success rate. 11% directness. Customers found what they were looking for only 26% of the time, and only 11% of attempts followed a direct path without backtracking. This was a clear signal that the information architecture needed rethinking before anything shipped.
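For readers unfamiliar with tree-test scoring, the sketch below shows how those two numbers are typically derived: success is the share of attempts that end on a correct node, and directness is the share of attempts that never backtrack. The sample paths are invented to illustrate the shape of the result, not the study's raw data.

    def score(attempts: list[tuple[bool, bool]]) -> tuple[int, int]:
        """attempts = [(ended_on_correct_node, backtracked), ...]"""
        n = len(attempts)
        success = round(100 * sum(correct for correct, _ in attempts) / n)
        direct = round(100 * sum(not back for _, back in attempts) / n)
        return success, direct

    # 15 invented attempts: 4 successes, 2 of all attempts with no backtracking
    sample = ([(True, False), (True, True), (True, True), (True, True)]
              + [(False, True)] * 10 + [(False, False)])
    print(score(sample))  # (27, 13): roughly the 26% / 11% shape of the result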

The finding that connected everything

The most important insight didn't come from the tree test — it came from the Marketing Preferences tab. Customers understood the "For You" tag we'd designed to signal personalized recommendations. But they didn't trust that it was actually relevant to them. They understood the label. They just didn't believe it.

The bridge to the trust metric

This finding wasn't a UI problem. It was a trust problem — specifically a transparency and personalization gap. Customers wanted to see how we'd segmented them. That insight is exactly why those two dimensions became central to the trust metric framework. The research didn't just validate the mockups — it validated the direction of the entire measurement strategy.

The future state hasn't shipped yet — the trust metric infrastructure it depends on is still being built. But the research produced a clear, evidence-backed picture of what the next version of the Preference Center needs to do: show customers the logic behind what they're seeing, not just give them controls to manage it.

"The thing I'm most proud of is helping our team become an established product — one with real understanding of what needs to happen — and less of an order-taking establishment."

A team that knows what it's doing and why

CSAT climbed from approximately 45 to 65 during my time on the team — a meaningful shift for a product that had previously had no honest measure of user satisfaction at all.

But the more durable outcome was structural. The team went from measuring a metric that was actively harmful to running a biweekly feedback review process driven by 600 real user responses per week. Engineers, product managers, and designers were in the same room, looking at the same data, deciding together what mattered next.

The trust metric framework gives the product a long-term north star. It's still being built — but it exists because a team of 12 people spent two workshops defining it together, which means they're invested in making it real.

What I'd do differently

I'd move faster on the trust metric. In retrospect, I spent time building consensus around the definition when I could have proposed a provisional definition and iterated. The workshops were valuable — the alignment they created was real — but I think there's a version of this where we're already measuring trust dimensions rather than still designing the measurement. The lesson: sometimes a good-enough framework shipped is more useful than a perfect one still being workshopped.
