TL;DR: This post shows how to write effective prompts for Wrike AI Agents so they behave like reliable team members. It explains what agents can and can’t access in Wrike, core prompt patterns (clear instructions, mapping tables, conditional rules, smart assignment, name extraction), and common pitfalls to avoid (vague asks, assumed context, chained actions, no fallbacks, math). It also includes testing tips for the Playground, ready-made prompt templates for typical workflows, and a short checklist to validate your prompt before going live.
Hi Community 👋
We’ve put together an AI Agent prompt writing guide to help you get more reliable, predictable results from your agents in Wrike. This post walks through how agents “think,” how to structure effective prompts, common pitfalls to avoid, and ready-to-use templates for real-world use cases like intake validation, smart assignment, and risk reporting.
The Golden Rule 💫
Write prompts like you’re explaining a task to a new team member who is smart, but doesn’t know anything about your company yet. Be clear and specific about what you want them to do, but don’t over-prescribe how they should do it. Let the agent figure out the best way to get there based on the context it can see in Wrike.
How Agents Think 🤔
Before writing prompts, understand what your agent can see:
Agents CAN read:
- Task name, description, and comments.
- Custom field values (text, numbers, drop-downs).
- Current status and assignee.
- User profiles (name, role, department, timezone).
- Folder/project location.
Agents CANNOT read:
- Formula/calculated field values (not supported yet; see Mistake 7 below).
- Data outside their assigned scope.
Key behavior: Agents infer context from everything they can see, including task titles. If your task is named "Social Media Campaign," the agent may assume it belongs to your Social team even if it's in a different folder.
Prompt Structure That Works ✔️
Pattern 1: Simple Instruction
For straightforward tasks, be direct.
Example:
You are a triage assistant. When a new task is created, read the
description and set the Priority field based on these rules:
- Contains "urgent," "ASAP," or "blocking" → Critical
- Contains "deadline" or a specific date → High
- Contains "when possible" or "low priority" → Low
- Everything else → Medium
Why it works: Clear role, specific trigger context, explicit rules with examples.
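If it helps to sanity-check a rule set like this before handing it to an agent, the same logic is easy to express deterministically. Here's a rough Python sketch (keyword lists mirror the prompt above); note that the agent, unlike literal string matching, can also catch paraphrases like "needs to ship today":

```python
def classify_priority(description: str) -> str:
    """Deterministic version of the triage rules above, for sanity-checking.

    Unlike an agent, this only matches keywords literally -- it won't
    catch paraphrases the way a language model can.
    """
    text = description.lower()
    if any(k in text for k in ("urgent", "asap", "blocking")):
        return "Critical"
    if "deadline" in text:  # the prompt also covers "a specific date"
        return "High"
    if "when possible" in text or "low priority" in text:
        return "Low"
    return "Medium"

print(classify_priority("This is blocking the release"))  # Critical
```

If your deterministic version needs more than a handful of branches, that's a sign the rules belong in the prompt as examples rather than exhaustive keyword lists.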
Pattern 2: Mapping Table
When you need to map inputs to outputs, use a simple arrow format.
Example:
Set the Goal field based on which folder the task is in:
socials → reach
email newsletter → engagement
outbound → lead generation
content and sales material → (leave blank)
Why it works: Easy to read, easy to maintain, easy to extend. When you add a new folder, just add a new line.
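The arrow format is essentially a lookup table. A hypothetical Python equivalent makes the structure explicit, including why a fallback for unknown folders matters (folder names here are just the ones from the example above):

```python
# Folder -> Goal mapping from the prompt above (names are illustrative)
FOLDER_TO_GOAL = {
    "socials": "reach",
    "email newsletter": "engagement",
    "outbound": "lead generation",
    "content and sales material": None,  # leave blank
}

def goal_for_folder(folder_name: str):
    # dict.get() returns None for folders not in the table --
    # the code-level analogue of giving the agent a fallback rule
    return FOLDER_TO_GOAL.get(folder_name.strip().lower())

print(goal_for_folder("Socials"))  # reach
```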
Pattern 3: Conditional Logic
For if/then scenarios, be explicit about conditions.
Example:
When the status changes to "Blocked":
1. Check the Blocker Reason field
2. If it mentions another team or external dependency:
- Post a comment tagging the task owner asking for an ETA
3. If it mentions missing information:
- Post a comment asking the requester to provide details
4. If the reason is unclear:
- Post a comment asking for clarification on what's blocking progress
Why it works: Each condition has a specific outcome. No ambiguity about what to do.
Pattern 4: Assignment with Context
When assigning work, give the agent context about your team.
Example:
Assign this task to a team member based on the task type:
- Design tasks (mentions "mockup," "visual," "UI") → assign to Design Team members
- Copy tasks (mentions "writing," "content," "blog") → assign to Content Team members
- Technical tasks (mentions "bug," "code," "API") → assign to Engineering Team members
Prefer assigning to the team member with the lowest current workload.
If you can't determine the type, assign it to the Project Lead.
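"Lowest current workload" is a simple tie-break rule. As a sketch (the data shapes here are hypothetical, not a Wrike API; in practice the agent infers workload from what it can see):

```python
def pick_assignee(candidates: list[str], active_tasks: dict[str, int]) -> str:
    """Pick the candidate with the fewest active tasks.

    `candidates` and `active_tasks` are illustrative inputs only --
    this just shows the tie-break rule the prompt describes.
    """
    return min(candidates, key=lambda name: active_tasks.get(name, 0))

print(pick_assignee(["Alex", "Jordan"], {"Alex": 5, "Jordan": 2}))  # Jordan
```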
Pattern 5: Name Extraction
When custom fields contain formatted data (like "JD John Doe" or "EMP-123 Sarah Smith"), tell the agent how to parse it:
Example:
The Artist field contains initials followed by the full name
(example: "AEM Ashley Morris").
When this task needs reassignment, extract the NAME PORTION
of the Artist field and assign the task to that person.
Why it works: Explains the format, tells the agent exactly what to extract.
Common Mistakes and Fixes 🪛
Mistake 1: Being Too Vague
Not ideal:
Help manage this project.
Ideal:
You monitor this project for risks. Every day, check for:
- Tasks overdue by more than 2 days
- Tasks with no assignee
- Tasks blocked for more than 1 week
Post a summary comment listing any issues found.
Mistake 2: Assuming Context
Not ideal:
Route tasks to the right team.
Ideal:
Route tasks based on the Category field:
- "Marketing" → assign to Marketing Team
- "Sales" → assign to Sales Team
- "Support" → assign to Support Team
- If Category is empty, post a comment asking the requester to specify.
Mistake 3: Expecting Sequential Dependent Actions
Not ideal:
First check if the task is urgent, then if it is, update the priority field, then based on the new priority, assign to the right person.
Why it fails: Multi-action agents run actions independently and in parallel. Action B can't use the result of Action A.
Ideal (split into separate concerns):
Agent 1 - Triage:
"Set the Priority field based on urgency signals in the description."
Agent 2 - Assignment (triggers on Priority field change):
"When Priority changes, assign based on these rules:
- Critical → Senior Team Lead
- High → Available team member with lowest workload
- Medium/Low → Round-robin among team"
Mistake 4: No Fallback Behavior
Not ideal:
Categorize the request type.
Ideal:
Categorize the request by setting the Type field:
- Bug report → "Bug"
- Feature idea → "Enhancement"
- Question → "Support"
- Complaint → "Escalation"
If you cannot determine the type, set it to "Needs Review"
and post a comment: "Unable to auto-classify. Please review
and set the Type field manually."
Mistake 5: Company Jargon Without Explanation
Not ideal:
If it's a P1, escalate to the Tiger Team.
Ideal:
If the Priority field is "Critical" (also called P1 internally),
assign to the Incident Response group (our "Tiger Team" for
urgent issues).
Mistake 6: Combining Similarly-Named Items in One Action
Not ideal:
Find the Gate1 task and update Gate1Date field.
Find the Gate2 task and update Gate2Date field.
Why it fails: Agents can confuse siblings with similar names (Gate1 vs Gate2, Phase1 vs Phase2), even when you reference them explicitly. We've seen cases where updating Gate1 accidentally changed Gate2's values.
Ideal:
Action 1: Find the task containing "Gate1" in the title.
Update the Gate1Date field with its due date.
Action 2: Find the task containing "Gate2" in the title.
Update the Gate2Date field with its due date.
Split similarly-named items into separate actions. One action per item = less confusion.
Mistake 7: Asking Agents to Do Math
Not ideal:
Calculate the overdue percentage: (actual days - baseline days) / baseline days × 100. Then assign a score: 0-10% = 0 points, 10-20% = 3 points, >20% = 5 points.
Why it fails: AI agents are language models, not calculators. They can reason about numbers and sometimes get calculations right, but they are fundamentally unreliable for arithmetic. We've seen cases where an agent's reasoning correctly states "the score should be 3" but then writes 0 to the field.
The reality: Agents cannot read formula/calculated fields yet. So you can't simply point the agent at a formula field and have it interpret the result.
Current workarounds:
- Use Wrike Automations to copy formula results to a regular field:
- The formula field calculates the score.
- Automation triggers when the formula field changes.
- Automation copies value to a regular number field.
- The agent reads the regular field.
- Pre-categorize with formulas, copy the category:
- Formula outputs text: =IF([Overdue%]>20, "High Risk", IF([Overdue%]>10, "Medium Risk", "Low Risk"))
- Automation copies to a regular dropdown field.
- Agent reads the dropdown and acts.
- Skip agents for calculation-heavy workflows:
- Some workflows are better suited to Wrike Automations + formula fields.
- Agents shine at interpretation and judgment, not math.
The rule: Don't ask agents to calculate. And until formula field support arrives, plan your field architecture so agents read regular fields that get their values from automations.
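To make the contrast concrete, here's the Mistake 7 scoring done deterministically: a Python sketch of the logic that belongs in a formula field or automation, never in a prompt. One assumption to flag: the original buckets overlap at the boundaries (0-10%, 10-20%, >20%), so treating exactly 10% as the low bucket and exactly 20% as the middle one is a choice made here.

```python
def overdue_score(actual_days: float, baseline_days: float) -> int:
    """Deterministic scoring for the Mistake 7 example.

    This logic belongs in a formula field or automation, not an agent
    prompt. Boundary handling (10% -> low, 20% -> middle) is an
    assumption; the original ranges overlap at those points.
    """
    overdue_pct = (actual_days - baseline_days) / baseline_days * 100
    if overdue_pct > 20:
        return 5
    if overdue_pct > 10:
        return 3
    return 0

print(overdue_score(13, 10))  # 30% overdue -> 5
```

Once a regular field holds this score (via the automation workaround above), the agent's job shrinks to interpretation: "If Overdue Score is 5, post a risk comment tagging the project lead."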
Agents are excellent at:
- Reading calculated values and deciding what to do.
- Categorizing results into buckets (high/medium/low).
- Explaining what numbers mean in context.
- Routing work based on thresholds.
Agents are not ideal at:
- Arithmetic (addition, subtraction, percentages).
- Counting items accurately.
- Comparing precise numerical values.
- Multi-step calculations.
Testing Your Prompts
Always use the Testing Playground before deploying.
Step 0: Verify What the Agent Can See
Before writing your actual prompt, start by asking the agent what context it has access to. Create a simple test prompt:
Tell me:
- What is the task title?
- What is the task description?
- What is the value of the Goal custom field?
- What is the current status?
- Who is the assignee?
- What folder is this task in?
Run this in the Playground against your target item. The agent's response tells you exactly what data it can read, and what it can't. If a field comes back empty or the agent says it can't access something, you know not to rely on it in your real prompt.
This takes 30 seconds and prevents hours of debugging prompts that reference data the agent can't see.
Step 1: Select a Realistic Test Item
Pick a task with a name, description, and field values similar to what the agent will actually encounter.
Step 2: Check the Reasoning
The Playground shows you what the agent "thought." If it made a wrong decision, you'll see why.
Step 3: Test Edge Cases
- What happens with minimal information?
- What happens with conflicting signals?
- What happens with unexpected values?
Step 4: Refine Based on Reasoning
If the agent misunderstood, don't just fix the output. Fix the instruction that led to the misunderstanding.
Prompt Templates by Use Case
Intake Validation
You validate incoming requests. When a new task is created, check that it has:
- A clear description (more than 20 words)
- A due date specified
- The Requester field filled in
If anything is missing, post a comment:
"Thanks for your request! Before we can start, please add: [list missing items]"
If everything is complete, post:
"Request received and validated. We'll begin work shortly."
Risk Reporter (Scheduled)
You are a project health monitor. Run daily and check all tasks in this project for:
1. Overdue tasks (past due date with status not Complete)
2. Stalled tasks (no updates in 7+ days, status not Complete)
3. Blocked tasks (status is Blocked)
4. Unassigned tasks with approaching due dates (within 3 days)
Post a summary comment with sections for each risk type.
If no risks are found, post: "Daily health check: All clear."
Smart Assignment
Assign new tasks to team members based on expertise and availability:
Expertise matching:
- Task mentions "data" or "analytics" → prefer Alex or Jordan
- Task mentions "customer" or "support" → prefer Sam or Riley
- Task mentions "design" or "creative" → prefer Morgan or Casey
Among matching experts, assign to whoever has fewer active tasks.
If no expertise match, assign round-robin among all team members.
Always include a comment explaining why you chose this assignee.
Status Sync
When the status changes, update the Phase field to match:
- "Not Started" or "Open" → Phase = "Backlog"
- "In Progress" or "In Review" → Phase = "Active"
- "Blocked" or "On Hold" → Phase = "Paused"
- "Complete" or "Cancelled" → Phase = "Closed"
Quick Reference: Prompt Checklist
Before deploying, verify your prompt has:
☑️ Clear role — Who is this agent? (triage assistant, risk monitor, etc.)
☑️ Specific trigger context — What event activates this? (new task, status change, etc.)
☑️ Explicit rules — What exactly should happen? (if X then Y)
☑️ Examples — Concrete cases the agent can learn from.
☑️ Fallback behavior — What to do when rules don't apply.
☑️ No jargon — Or jargon is explained.
Getting Help
Agent not firing?
- Is it toggled ON in Space Settings?
- Does the trigger match what actually happened?
Agent doing the wrong thing?
- Check the Activity Log for reasoning.
- Test in Playground with the actual item.
- Make instructions more specific.
Need inspiration?
- Start with built-in templates (Risk Reporter, Intake, Triage).
- Ask in the Wrike community.
Thanks for reading 😁 If you have any questions or feedback, let us know in the comments below 👇