AI Design · Tooling · Workflow Automation · The Home Depot
Most designers use AI tools. I build them. I designed and shipped a production AI agent on Gemini Enterprise that reads our communications standards and drafts work the team can actually use—cutting ideation time and closing the gap left when our copywriter departed.
The ECC team designs customer communications across email, SMS, RCS, and iOS Live Activity. These aren't simple UI surfaces—they're copy-driven. Tone, structure, brevity, and clarity matter enormously. When our copywriter left, that gap hit every project from the first blank screen.
Beyond the copywriting gap, ideation was expensive. Designers would spend significant time at the start of each project deciding where content should live, what to prioritize above the fold, and how to balance competing information needs. These decisions were being made fresh every time—even though we had standards that should have been answering them.
"The standards exist so we stop debating the same things. The agent exists so we stop starting from zero."
I built a production AI agent on Gemini Enterprise that serves as a first-pass communications designer. The agent is connected to our design standards, principles, and copy guidelines—as well as live project context pulled from Confluence and Slack. When given a brief, it generates a grounded starting point: a draft communication that applies our rules, not generic AI instincts.
The hard part wasn't prompting—it was building the context graph: explaining to the agent where the information lives, how different pieces relate to each other, and how to apply them. That's the design work most people skip when they talk about AI tooling.
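To make "context graph" concrete, here is a minimal sketch of the idea—mapping which sources govern which communication channels, so the agent knows what to read for a given brief. All names and the structure itself are hypothetical illustrations, not the actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    """One node in the context graph: a document the agent can ground on."""
    name: str
    kind: str  # e.g. "standard", "guideline", "live-context"
    applies_to: list = field(default_factory=list)  # channels this source governs

# Hypothetical sources modeled after the ones described above
SOURCES = [
    Source("design-standards", "standard", ["email", "sms", "rcs", "live-activity"]),
    Source("copy-guidelines", "guideline", ["email", "sms"]),
    Source("confluence/project-brief", "live-context", ["email"]),
]

def grounding_context(channel: str, sources: list) -> list:
    """Return the names of sources the agent should read for a channel."""
    return [s.name for s in sources if channel in s.applies_to]

print(grounding_context("email", SOURCES))
# → ['design-standards', 'copy-guidelines', 'confluence/project-brief']
```

The point of the sketch: the design work is in the mapping itself—deciding which documents apply where and in what priority—not in the prompt text.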
I iterated through three platforms—Copilot, Gemini Pro, then Gemini Enterprise—before landing on a setup that worked reliably enough for team use.
The agent is used by 10 to 15 cross-functional partners each week—designers, product managers, and stakeholders who need a communications starting point before a design review. It's been part of the team's workflow for one quarter, and it's already changed how ideation sessions run.
Instead of opening a blank Figma file and debating where the badge goes, designers open the agent output and ask "does this look right?" The conversation shifts from generation to evaluation—which is where human judgment actually adds value.
The agent also functions as a tiebreaker. When a stakeholder pushes for a design pattern that conflicts with our standards, the agent's output—which applies those standards automatically—provides an independent reference point. It's no longer just a designer's opinion. It's what our own documented guidelines produce.
The agent launched this quarter, so hard metrics are still accumulating. What's already clear: