Strategy · Metrics · UX Design · The Home Depot
For more than three years I designed preference experiences at Home Depot with no design system, shifting priorities, and a constant stream of requests. Then I fixed the process from the inside, and used real customer data to build the roadmap that changed what the team worked on next.
I joined The Home Depot's Data Privacy team in November 2021 as the sole UX designer on the Preferences experience. The team's mandate was to let customers manage their marketing preferences—a seemingly simple surface that quietly touched millions of customers and a complex web of internal stakeholders.
There was no design system applied to the space. No components in Figma that matched what was live. No shared vocabulary between design and engineering. And requests kept coming—from product, from legal, from the business—each one needing to move fast.
"The ask was clear. The infrastructure to deliver it well wasn't. So I built it while I worked."
In the early days, creating a new design meant taking a screenshot of the live page, opening it in Figma, and editing on top of the image. For more complex updates, I'd work directly in Chrome's inspect element to mock up changes before turning them into redlines for engineers.
It worked—but barely. The handoff quality was inconsistent. Engineers had to guess at spacing and states. Review cycles stretched because there was no shared source of truth for what the UI should look like.
At the same time, priorities were unstable. Requests would come in, get half-designed, then get paused or changed. I had to stay in motion without the scaffolding that usually makes motion efficient.
Over three-plus years I shipped a steady stream of UX improvements—each one a real customer-facing change to how Home Depot manages preferences. In parallel, I was building the internal infrastructure to make each subsequent update faster.
Each of these required stakeholder alignment, engineering handoff, and QA across a surface I was still learning. None of them had a clean design system to pull from when they started.
About six months in, I started building a working component library in Figma—not from a design system, but from scratch, reverse-engineered from what was actually live.
I created auto-layout templates for the Privacy and Preferences pages, and built roughly 20 design components based on the UI that engineering had already shipped. Two master templates, all auto-layout, all reusable.
Before this, I was stitching together screenshots and hoping engineers could interpret what I meant. Afterward, I could hand off complete, annotated designs with states, spacing, and component behavior clearly documented. Design time dropped by more than 50%. More importantly, handoff accuracy improved: engineers stopped guessing.
About a year in, I turned my attention to how we were measuring success. The team's primary metric at the time was opt-out rate—how many customers were unsubscribing. That's not a measure of experience quality. It's a measure of damage.
I identified CSAT (via a thumbs up / thumbs down widget) as the right metric. It was fast to implement, widely understood internally, and gave us actual signal about customer sentiment rather than just behavior.
I got the team aligned on the metric, designed and shipped the widget, and started collecting structured feedback. We tracked which customers were leaving feedback and what they were asking for.
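As a rough illustration of the measurement step (the real tracking lived in our analytics tooling; the record fields and numbers below are hypothetical), a binary thumbs up / thumbs down widget reduces to a simple CSAT calculation, with negative-vote comments doubling as a backlog of customer asks:

```python
from collections import Counter

# Hypothetical feedback records from the thumbs up / thumbs down widget.
# Field names and values are illustrative, not the actual schema.
feedback = [
    {"customer_id": "c1", "vote": "up", "comment": ""},
    {"customer_id": "c2", "vote": "down", "comment": "Let me pick a language"},
    {"customer_id": "c3", "vote": "up", "comment": ""},
    {"customer_id": "c4", "vote": "down", "comment": "Too many order emails"},
]

def csat_score(records):
    """CSAT as the share of positive votes among all votes."""
    votes = Counter(r["vote"] for r in records)
    total = votes["up"] + votes["down"]
    return votes["up"] / total if total else None

def open_requests(records):
    """Collect the free-text asks attached to negative votes."""
    return [r["comment"] for r in records if r["vote"] == "down" and r["comment"]]

print(f"CSAT: {csat_score(feedback):.0%}")  # prints "CSAT: 50%"
for ask in open_requests(feedback):
    print("-", ask)
```

The point of the binary widget was exactly this simplicity: one number to track over time, plus a stream of verbatim requests to mine for the roadmap.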
"The data didn't just tell us how customers felt. It told us what to build next."
From that feedback I built a prioritized roadmap using a 2x2 framework—balancing customer need against business feasibility. The top items that emerged: transactional preferences (letting customers control when they hear from us about their orders) and language preferences.
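A 2x2 like this can be sketched as a simple scoring pass: rate each candidate on customer need and business feasibility, then bucket it into a quadrant. The item names beyond the two from the roadmap, the scores, and the quadrant labels here are all illustrative, not the team's actual ratings:

```python
# Minimal sketch of a 2x2 prioritization. Each candidate gets a
# customer-need score and a business-feasibility score (1-5 here).
# Scores and the last two items are hypothetical examples.
items = {
    "transactional preferences": {"need": 5, "feasibility": 4},
    "language preferences": {"need": 4, "feasibility": 4},
    "account-level quiet hours": {"need": 2, "feasibility": 4},
    "cross-banner preference sync": {"need": 4, "feasibility": 2},
}

def quadrant(need, feasibility, midpoint=3):
    """Map a (need, feasibility) pair to one of four quadrants."""
    high_need = need > midpoint
    high_feas = feasibility > midpoint
    if high_need and high_feas:
        return "do now"
    if high_need:
        return "bigger bet"
    if high_feas:
        return "quick win"
    return "skip"

# Rank by combined score so "do now" items surface first.
for name, s in sorted(items.items(), key=lambda kv: -(kv[1]["need"] + kv[1]["feasibility"])):
    print(f"{name}: {quadrant(s['need'], s['feasibility'])}")
```

Under this framing, transactional and language preferences both land in the high-need, high-feasibility quadrant, which matches what rose to the top of the roadmap.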
Both have since been prioritized. Channel preferences have shipped to a subset of customers. The roadmap didn't just sit in a deck—it changed what the team worked on.