Prompt Engineering

Better Prompts. Better Code.

The quality of AI-generated code depends almost entirely on what you ask for and how you ask for it. This guide covers the patterns that consistently produce better results with Claude Code.

From task scoping to system prompts, from iterative refinement to role-specific instructions: practical techniques you can apply in your next coding session.

Why Prompt Engineering Matters for Code

When you ask a human developer to 'build a login page', they ask clarifying questions: Which auth provider? What fields? Should it handle OAuth? Error messages? Loading states?

AI coding agents will try to answer all those questions themselves. Sometimes they guess right. Often they guess something reasonable but not what you wanted. The gap between 'reasonable' and 'exactly right' is what prompt engineering closes.

Good prompts don't need to be long. They need to be specific about the things that matter and silent about the things that don't. That balance is what this guide is about.

Five Core Principles

Patterns that improve results regardless of the task.

1. Be Specific About Outcomes, Not Steps

Instead of 'create a React component, then add state, then add styling', describe the end result: 'Build a collapsible sidebar that shows project names, supports drag-to-reorder, and uses our existing Tailwind theme.' Let the agent decide how to get there.

Avoid

Create a component. Add useState. Add a toggle button. Style it with Tailwind.

Better

Build a collapsible sidebar component that lists projects by name. It should support drag-to-reorder and match our dark theme (bg-[#1a1a2e], border-[#2a2a40]). Collapsed state should persist across page reloads.

2. Scope the Work Clearly

Agents work best when they know the boundaries. Specify which files to touch (or avoid), which patterns to follow, and what 'done' looks like. Unbounded tasks lead to sprawling changes that are hard to review.

Avoid

Refactor the authentication system.

Better

Refactor the login handler in src/api/auth/login.ts to use bcrypt instead of sha256 for password hashing. Don't change the JWT logic or the session cookie handling. Update the related tests in tests/auth/.

3. Provide Context That Matters

Claude Code can read your project files, but it can't read your mind. If there's a convention you follow, a library you prefer, or a pattern you've already established, say so. This saves rework.

Avoid

Add form validation.

Better

Add form validation to the signup form using zod (we already use it for the settings form in src/forms/settings.ts). Show inline error messages below each field. Follow the same error styling as the login form.

4. One Task per Prompt

Compound prompts ('build the API, write tests, update the docs, and deploy') force the agent to hold too many goals at once. Break complex work into sequential, focused tasks. Each one builds on the previous result.

Avoid

Build the user profile page, write API endpoints, add tests, update the README, and fix the nav bar while you're at it.

Better

Build the GET /api/user/profile endpoint. Return id, name, email, and plan fields. Use the existing auth middleware for authentication. (The tests, docs, and nav fix become separate follow-up prompts.)

5. Iterate, Don't Restart

If the first result isn't right, refine the prompt rather than starting over. Claude retains the full conversation context. Say what's wrong, what to change, and what to keep. Iteration is faster than reinvention.

Avoid

That's wrong. Start over and build the component differently.

Better

The layout is good but the mobile breakpoint is wrong. Below 768px, stack the cards vertically instead of using a grid. Keep everything else as is.

System Prompts: Context That Persists

System prompts set the baseline behavior for an agent before you say anything. They're the most underused tool in AI coding.

A system prompt tells the agent who it is, what it should focus on, and what it should avoid. It applies to every message in the session. Think of it as the agent's job description.

AgentsRoom ships with 14 role-specific system prompts: one for each agent type. The Frontend agent's prompt tells it to focus on components, accessibility, and responsive design. The QA agent's prompt tells it to think about edge cases and write comprehensive tests. You can customize these or write your own.

Example: Frontend Agent System Prompt

You are a senior frontend developer. Focus on React components, CSS/Tailwind styling, accessibility (WCAG AA), and responsive design. Use the project's existing component library before creating new components. Prefer composition over inheritance. Write semantic HTML. Never modify backend files.

Writing Effective System Prompts

  • Define the role and its boundaries. What should the agent focus on? What should it ignore?
  • Mention specific technologies and versions. 'React 19 with Server Components' is better than 'modern React'.
  • Reference project conventions. 'Use Zustand for state' tells the agent not to reach for Redux or Context.
  • Set quality expectations. 'Write TypeScript with strict mode, no any types' prevents shortcuts.
  • Include negative constraints. 'Never modify files in /api/' keeps the agent in its lane.
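Applied together, those guidelines produce a prompt like this sketch for a backend agent. The stack, library choices, and paths here are illustrative assumptions drawn from examples elsewhere in this guide, not AgentsRoom defaults; substitute your own.

```
You are a senior backend developer. This project uses Next.js 16 (App
Router) with TypeScript in strict mode and Prisma ORM.
- Focus on API routes, data access, and input validation.
- Validate all request bodies with zod.
- Use Zustand for any shared client state; do not reach for Redux or Context.
- Write TypeScript with strict mode, no `any` types.
- Never modify files under src/components/ or src/styles/.
```

Note how each line maps to one of the guidelines above: role and boundaries, specific technologies with versions, project conventions, quality expectations, and a negative constraint.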

CLAUDE.md: Project-Level Context

The most effective prompt isn't typed into a chat. It lives in your repository.

CLAUDE.md is a markdown file at the root of your project that Claude Code reads automatically. It contains project structure, conventions, stack details, and guidelines that apply to every agent session in the project.

Instead of repeating 'we use Tailwind CSS 4, Prisma ORM, and Next.js 16' in every conversation, write it once in CLAUDE.md. Every agent inherits this context. AgentsRoom includes a built-in editor for CLAUDE.md so you can update it without leaving the app.

A well-written CLAUDE.md is worth more than dozens of carefully crafted individual prompts. It compounds: every session benefits from it.
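For illustration, here is a minimal CLAUDE.md sketch assembled from the conventions mentioned throughout this guide (the stack, paths, and colors are examples, not a template to copy verbatim):

```markdown
# CLAUDE.md

## Stack
- Next.js 16 (App Router), TypeScript (strict mode)
- Tailwind CSS 4 for styling, Prisma ORM with PostgreSQL

## Structure
- src/components/ — shared UI components
- src/api/ — route handlers
- src/hooks/ — shared hooks (useAuth lives here)
- tests/ — vitest test suites

## Conventions
- Validate all form and request input with zod
- Dark theme tokens: bg-[#1a1a2e], border-[#2a2a40]
- Collapsed/expanded UI state persists across page reloads
```

Keep it short and factual: the agent reads this at the start of every session, so every line should earn its place.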

Build a Prompt Library

Stop rewriting the same instructions. Save what works and reuse it.

If you find yourself typing the same kind of request across projects ('write unit tests for this file', 'refactor this to use the repository pattern', 'add error handling to all API routes'), save it as a reusable prompt.

AgentsRoom includes a prompt library feature with two levels: per-project prompts for project-specific tasks, and global prompts (cloud-synced) for patterns you use everywhere.

Good candidates for library prompts: code review checklists, test writing templates, migration scripts, component scaffolding instructions, security audit steps. Anything you'd put in a team wiki as a standard procedure.

Prompt Library Examples

Write Unit Tests

Write unit tests for [file]. Use vitest. Cover the happy path, edge cases (empty input, null, invalid types), and error handling. Mock external dependencies. Aim for >90% branch coverage.

Code Review

Review the changes in the current git diff. Check for: unused imports, missing error handling, type safety issues, potential race conditions, and naming inconsistencies. Suggest fixes for each issue found.

API Endpoint

Create a REST endpoint for [resource]. Include input validation with zod, proper error responses (400, 401, 404, 500), TypeScript types for request/response, and a JSDoc comment describing the endpoint. Follow the existing pattern in src/api/.

Advanced Patterns

Techniques for complex tasks that go beyond single prompts.

Prompt Chaining

Break a large task into ordered steps. Start the first agent with step one, wait for completion, then start the next agent with step two (referencing the output of step one). Each step is smaller and more focused. Example: Agent 1 designs the database schema, Agent 2 writes the API using that schema, Agent 3 writes tests against the API.
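The schema-to-API-to-tests chain above could be run as three sequential prompts, each referencing the previous agent's output. The file paths and app details below are hypothetical:

```
Agent 1: Design the database schema for a task-tracking app. Output a
Prisma schema to prisma/schema.prisma with users, projects, and tasks.

Agent 2: Using the schema in prisma/schema.prisma, implement CRUD
endpoints for tasks in src/api/tasks/. Follow the existing route pattern
and use the existing auth middleware.

Agent 3: Write vitest integration tests in tests/api/tasks/ against the
endpoints from the previous step. Cover auth failures and invalid input.
```

Because each prompt names the concrete artifact the previous step produced, every agent starts with a focused, verifiable scope.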

Cross-Agent Review

After one agent finishes, point a different agent at its output. 'Review the changes the frontend agent just made in src/components/. Check for accessibility issues and missing error states.' A fresh agent with a different role catches things the original agent missed.

Progressive Constraints

Start with a loose prompt to see how the agent approaches the problem. Then add constraints in follow-up messages: 'Good structure, but use server components instead of client components.' 'Keep the hook, but remove the useEffect and use a React Query mutation instead.' Each iteration narrows toward the solution you want.

Reference Implementation

Point the agent at existing code: 'Build a settings page following the same pattern as src/pages/profile.tsx. Same layout structure, same form handling, same error display.' This is often more effective than describing the pattern in words.

Common Mistakes

Patterns that consistently produce worse results.

Over-specifying Implementation

Telling the agent exactly which functions to write, which variables to name, and which order to implement things. This micromanagement removes the agent's ability to find a better approach. Describe the outcome, not the procedure.

No Scope Boundaries

Asking an agent to 'improve the codebase' with no constraints. Without boundaries, the agent might refactor files you didn't want touched, change APIs that other code depends on, or spend tokens on low-priority improvements.

Ignoring Existing Code

Not mentioning that a pattern, utility, or component already exists in the project. The agent will create a new one. A simple 'we already have a useAuth hook in src/hooks/' saves significant rework.

Compound Mega-Prompts

Cramming five tasks into one message. The agent will attempt all of them, but quality drops as it juggles competing goals. Split them into sequential, focused requests instead.

FAQ

How long should a coding prompt be?
Most effective coding prompts are 2 to 5 sentences. Long enough to specify the outcome, scope, and key constraints. Short enough that the agent doesn't get lost in details. If your prompt is a full paragraph, consider whether some of that context belongs in CLAUDE.md or a system prompt instead.
Should I write prompts differently for Opus vs Sonnet?
Slightly. Opus handles ambiguity better and can infer intent from less context. Sonnet benefits from more explicit instructions and clearer scope boundaries. For both models, specificity about the expected outcome improves results.
How does AgentsRoom help with prompt engineering?
Three ways: built-in role-specific system prompts for each of the 14 agent types, a prompt library for saving and reusing effective prompts, and a CLAUDE.md editor for project-level context. These layers mean you spend less time crafting individual messages because the baseline context is already good.
Can I share prompts across a team?
Yes. AgentsRoom stores prompts in two locations: project-level prompts in .agentsroom/prompts.json (version-controlled, shared via git) and personal prompts in prompts-personal.json (gitignored). Global prompts sync via the cloud across all your devices.
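Since the project-level file is plain JSON, it diffs cleanly in code review. A hypothetical entry might look like the sketch below; the actual AgentsRoom schema is not documented here, so treat the field names as assumptions:

```json
{
  "prompts": [
    {
      "name": "Write Unit Tests",
      "body": "Write unit tests for [file]. Use vitest. Cover the happy path, edge cases, and error handling."
    }
  ]
}
```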
What's the difference between a system prompt and CLAUDE.md?
CLAUDE.md is project context that every agent reads automatically: stack, structure, conventions. System prompts are agent-specific behavioral instructions: role, focus areas, constraints. They complement each other. CLAUDE.md says 'this project uses Next.js 16 with Prisma.' The system prompt says 'you are a backend developer focused on API routes.'

Write Better Prompts, Ship Better Code

AgentsRoom gives you system prompts, a prompt library, and CLAUDE.md editing built in. Less time crafting prompts, more time building.

Download for macOS

Requires a Claude subscription (Max or Pro)