The PM Co-Pilot: From Messy Brief to Developer-Ready Spec
When the AI knows your codebase, it can structure your specs, surface edge cases, and write acceptance criteria that actually make sense. A workflow for spec-driven development without the three-hour grind.
As someone building products, using AI to augment your work is a huge help. The usual pattern: you explain something to the AI, and it helps you structure it, asks questions, and clarifies points that might not be clear.
But if the AI can also read your codebase and has full context of the current platform, it can review and understand your code far more accurately, grasp the task or problem you're trying to solve, and write something very close to spec-driven development. It will also help you write the expected outcome, and that outcome stays coherent with the platform you already have.
The Problem: Vague Specs, Slow Feedback Loops
Most of the time, PMs write task cards that are incomplete or unclear. Developers read them, ask clarifying questions. Back-and-forth starts. Days get lost.
Here's what that looks like:
We need to capture and log UTM parameters when users register on our platform. Currently we run Meta ads campaigns, but will expand to other channels in the future like Google Ads and TikTok. When a user registers, extract any UTM parameters from the URL (utm_source, utm_medium, utm_campaign, utm_content, utm_term) and store them tied to that user's record in the database. For example, a user clicking a Meta ad might arrive with ?utm_source=meta&utm_medium=cpc&utm_campaign=summer_sale, and we need to log exactly those values. If no UTM parameters exist (direct traffic, organic search, or word-of-mouth), store null values for those fields. Expand the existing users table in MySQL to include columns for each UTM parameter.
The thinking is all there. But it's not structured. Developers will ask:
- What happens if the user comes back on a different device?
- Do we track first-touch or last-touch attribution?
- What if UTM params are partial?
The Solution: Template + AI Co-Piloting
Instead of spending hours refining that input yourself, you give it to an AI tool that knows your codebase. You ask it to structure this into a task card using your template. When you hit uncertainty, like "what are the edge cases here?", you ask the AI to walk you through them.
OUTPUT (Structured Task Card):
Card Title: Capture UTM Parameters on User Registration
Description:
Capture and store UTM parameters when users register on the platform. This enables tracking of acquisition sources for marketing attribution (Meta Ads, Google Ads, TikTok, etc.).
UTM Parameters to Capture:
| Parameter | Description | Example |
|---|---|---|
| utm_source | Traffic source | meta, google, tiktok |
| utm_medium | Marketing medium | cpc, email, social |
| utm_campaign | Campaign name | summer_sale, brand_awareness |
| utm_content | Ad/content variant | ad_v1, banner_blue |
| utm_term | Paid search keyword | wellness_music |
Example URL:
https://finetunes.app/landing?utm_source=meta&utm_medium=cpc&utm_campaign=summer_sale
Behavior:
| Scenario | Stored Values |
|---|---|
| User arrives via Meta ad with UTM params | Store all UTM values |
| User arrives via direct traffic (no params) | Store null for all UTM fields |
| User arrives via organic search (no params) | Store null for all UTM fields |
| User arrives with partial UTM params | Store available values, null for missing |
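The behavior table above can be sketched as a small extraction helper that returns every UTM field, mapping anything missing to null. This is a hypothetical sketch (the function name and shape are illustrative, not the platform's actual code), but it covers all four scenarios: full, partial, and absent parameters all produce the same five keys.

```python
from urllib.parse import urlparse, parse_qs

UTM_FIELDS = ["utm_source", "utm_medium", "utm_campaign", "utm_content", "utm_term"]

def extract_utm_params(url: str) -> dict:
    """Return a dict with every UTM field; missing params map to None (null)."""
    query = parse_qs(urlparse(url).query)
    # parse_qs returns a list per key; take the first value for each UTM field.
    return {field: query.get(field, [None])[0] for field in UTM_FIELDS}
```

Because the dict always contains all five keys, the "store null for missing fields" rule falls out for free: a direct-traffic URL simply yields `None` for every field.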
Database Changes:
Add columns to users table:
| Column | Type | Nullable |
|---|---|---|
| utm_source | String | Yes |
| utm_medium | String | Yes |
| utm_campaign | String | Yes |
| utm_content | String | Yes |
| utm_term | String | Yes |
Implementation Flow:
| Step | Action |
|---|---|
| 1 | User lands on site with UTM params in URL |
| 2 | Store UTM params in session/local storage |
| 3 | User completes registration |
| 4 | Extract stored UTM params |
| 5 | Save UTM values to user record in database |
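The five steps above can be sketched end to end. This is a minimal sketch, assuming a plain dict stands in for session/local storage and another for the user record; the real implementation would use the browser's storage API and the MySQL users table.

```python
from urllib.parse import urlparse, parse_qs

UTM_FIELDS = ["utm_source", "utm_medium", "utm_campaign", "utm_content", "utm_term"]

def on_landing(url: str, session: dict) -> None:
    """Steps 1-2: extract UTM params from the landing URL and persist them."""
    query = parse_qs(urlparse(url).query)
    session["utm"] = {f: query.get(f, [None])[0] for f in UTM_FIELDS}

def on_registration(user_record: dict, session: dict) -> dict:
    """Steps 3-5: copy the persisted UTM values onto the new user record."""
    user_record.update(session.get("utm", {f: None for f in UTM_FIELDS}))
    return user_record
```

Splitting landing from registration is the point of the flow: the params are captured once on arrival and survive navigation until the user actually signs up.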
Edge Cases:
| Scenario | Handling |
|---|---|
| User navigates away before registering | Persist UTM params in session/local storage |
| UTM params overwritten by new visit | Keep first-touch or last-touch? |
| User registers on different device | UTM params lost (acceptable) |
Question: First-Touch or Last-Touch Attribution?
- First-touch: Keep original UTM params even if user returns via different source
- Last-touch: Overwrite with most recent UTM params
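The two policies differ only in whether existing values win on a repeat visit. A hypothetical sketch of both (`merge_utm` and its `mode` parameter are illustrative names, not a fixed API), where `existing` is what's already persisted and `incoming` comes from the current visit:

```python
def merge_utm(existing, incoming, mode="first"):
    """Decide which UTM set to keep when a user returns before registering.

    mode="first": keep the originally stored params (first-touch).
    mode="last":  overwrite with the latest visit's params (last-touch),
                  but only if the new visit actually carries UTM values.
    """
    if existing is None:
        return incoming
    if mode == "last" and any(incoming.values()):
        return incoming
    return existing
```

Note the guard on last-touch: a return visit with no UTM params (e.g. a bookmarked link) shouldn't wipe out a known attribution with nulls. Whether that's the right call is exactly the open question the card flags.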
Acceptance Criteria:
- Database schema updated with UTM columns
- UTM params extracted from URL on landing
- UTM params persisted until registration
- UTM values saved to user record on registration
- Null values stored when no UTM params present
- Works for Meta, Google, TikTok, and any future ad sources
Why This Works
The card is now spec-driven and developer-ready. No ambiguity. Developers know exactly what to build and what to test. The edge cases are visible. Open questions are explicit.
And you didn't spend three hours perfecting it. You gave the AI your raw thinking, asked it to structure it, asked it to surface edge cases, and refined from there.
The Template
You can use this template for any task card. Fill it in yourself, or use it as a starting point with an AI tool. Either way, the structure forces clarity.
## **Task Card Template for Product/Design Specifications**
---
### **Card: [Title - Brief, Action-Oriented]**
**Description:**
[1-2 sentences explaining what this card accomplishes and why.]
---
**Scope / Applies to:**
| Area | Included |
|------|----------|
| [Page/Component/Feature] | ✓ / ✗ |
| [Page/Component/Feature] | ✓ / ✗ |
---
**Current state (if redesign):**
- [What exists today]
- [Pain point or limitation]
---
**Proposed design / behavior:**
| Element | Description |
|---------|-------------|
| [Element name] | [What it does / how it looks] |
| [Element name] | [What it does / how it looks] |
---
**Visual mockup (ASCII or description):**
┌─────────────────────────┐
│ [Visual representation] │
│ [of the component]      │
└─────────────────────────┘
---
**Key behaviors / interactions:**
| Action | Result |
|--------|--------|
| [User action] | [What happens] |
| [User action] | [What happens] |
---
**States (if applicable):**
| State | Visual / Behavior |
|-------|-------------------|
| Default | [Description] |
| Active | [Description] |
| Disabled | [Description] |
---
**Design variations (for designer):**
| Variation | Description |
|-----------|-------------|
| A | [Option A details] |
| B | [Option B details] |
---
**Dependencies / Blocked by:**
| Feature/Card | Relationship |
|--------------|--------------|
| [Card name] | Blocked by / Blocks / Related |
---
**Questions / Open items:**
- [Any decisions still needed?]
---
**Acceptance criteria:**
- [ ] [Specific, testable requirement]
- [ ] [Specific, testable requirement]
- [ ] [Specific, testable requirement]
---
### **Usage notes:**
- **Keep it scannable**: use tables over paragraphs
- **Be specific**: avoid vague language like "improve" or "better"
- **Show, don't just tell**: ASCII mockups clarify intent
- **Separate concerns**: one card = one focused feature/change
- **Link dependencies**: reference related cards by name
- **Designer vs Dev**: clarify what's for design exploration vs fixed requirements
Getting Started
- Try this with your next feature request
- Pair it with an AI tool that can read your codebase
- When you're unsure about something, ask the AI to walk you through it
- Refine, and ship the card
It's not autopilot. It's co-piloting. And it works.