In my workflow, AI isn't just a shortcut. It's not a replacement for good judgment. It's a partner that makes my execution more effective and my focus sharper.
There's no universal playbook. I use what the problem needs. Some projects demand deep research synthesis. Others need rapid concept pressure-testing, or an interface designed to operate alongside a model from day one. The work determines the approach, not the other way around.
Raw workshop notes (whiteboard captures, stakeholder comments, discussion threads) became a Design Thinking Brief in hours instead of days. That brief then kept data engineering, product management, and design pointing in the same direction without a single alignment meeting to explain it.
On the same engagement: hundreds of CSAT responses per service distilled into three actionable themes. Qualitative feedback that would have been inaccessible at that volume became a first-class signal on the dashboard.
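The mechanics of that distillation step can be sketched in a few lines. This is a hypothetical, minimal version: the theme names, keywords, and sample responses are illustrative assumptions, not data from the actual engagement, and the real pipeline would use an LLM or embedding-based clustering rather than keyword matching.

```python
from collections import Counter
import re

# Hypothetical theme buckets -- illustrative only, not the engagement's real themes.
THEME_KEYWORDS = {
    "slow response times": {"slow", "wait", "delay", "timeout"},
    "confusing navigation": {"confusing", "lost", "find", "menu"},
    "billing surprises": {"charge", "invoice", "billing", "unexpected"},
}

def tag_response(text: str) -> list[str]:
    """Return every theme whose keyword set overlaps this response."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return [theme for theme, kw in THEME_KEYWORDS.items() if words & kw]

def distill(responses: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Count theme hits across all responses and keep the top few."""
    counts = Counter(t for r in responses for t in tag_response(r))
    return counts.most_common(top_n)
```

The point of the sketch is the shape of the step: free-text feedback goes in, a small ranked set of themes comes out, and that ranked set is what becomes a dashboard signal.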
Half the user stories came from direct interviews. The other half came from analyzing thousands of behavioral sessions, surfacing patterns that only became visible at scale. Each story was validated against actual usage logs before anything went on the roadmap.
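That validation gate is simple to express in code. A minimal sketch, assuming a story carries the behavior it claims users exhibit and a session log is a list of event names; the story IDs, event names, and support threshold are hypothetical, chosen only to illustrate the check.

```python
from dataclasses import dataclass

@dataclass
class Story:
    id: str
    behavior: str  # the event the story claims users actually perform

def validate(stories: list[Story], sessions: list[dict],
             min_support: int = 5) -> list[tuple[str, int]]:
    """Keep only stories whose claimed behavior appears in enough real sessions."""
    validated = []
    for story in stories:
        support = sum(1 for s in sessions if story.behavior in s["events"])
        if support >= min_support:
            validated.append((story.id, support))
    return validated
```

A story with no supporting sessions never reaches the roadmap, however plausible it sounded in an interview.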
When a product is AI-native, the UX work lives inside the system's decision model. On this project, that meant designing the Workflow Presentation Rules: the structured logic governing when the system uses conversational language, structured actions, or event-based notifications. The difference between a tool that feels like a thinking partner and one that feels like a search bar lives in that layer.
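The shape of that layer can be sketched as a small decision function. The actual Workflow Presentation Rules aren't reproduced here; the signal names and thresholds below are illustrative assumptions meant to show how structured logic, not ad-hoc copy decisions, selects the presentation mode.

```python
from enum import Enum, auto

class Mode(Enum):
    CONVERSATIONAL = auto()     # open-ended dialogue
    STRUCTURED_ACTION = auto()  # explicit choices: buttons, forms
    NOTIFICATION = auto()       # passive, event-driven alert

# Hypothetical rules -- thresholds and inputs are illustrative, not the real spec.
def presentation_mode(intent_confidence: float, needs_user_choice: bool,
                      user_initiated: bool) -> Mode:
    if not user_initiated:
        return Mode.NOTIFICATION       # system-originated events stay passive
    if needs_user_choice and intent_confidence >= 0.8:
        return Mode.STRUCTURED_ACTION  # confident enough to offer concrete actions
    return Mode.CONVERSATIONAL         # ambiguous intent: ask, don't assume
```

Because the logic is explicit, it can be reviewed, tested, and tuned like any other part of the system, which is what makes the "thinking partner" feel deliberate rather than accidental.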
The UX challenge on this project: make automated infrastructure recommendations feel trustworthy and legible without requiring users to reverse-engineer why each choice was made. The system guides. It never decides for you.