Apply prompting strategies
A unified guide to writing clear, adaptable prompts and choosing the right prompting approach for every task in Rocket.
This guide combines two essential aspects of prompting in Rocket:
- The C.L.E.A.R. framework for writing strong, effective prompts.
- Core prompting strategies like zero-shot, one-shot, few-shot, and chain-of-thought.
These tools help you go from simple tasks to deeply structured conversations - faster and more reliably.
Why prompting principles matter
A prompt isn’t just a request - it’s your instruction manual for Rocket’s AI.
Strong prompts give you better structure, smarter behavior, and fewer surprises.
This guide introduces two core models that will level up how you prompt:
- The C.L.E.A.R. framework for writing high-quality prompts.
- A set of universal prompting strategies, including zero-shot, one-shot, few-shot, and chain-of-thought prompting.
The C.L.E.A.R. Framework
This framework outlines five qualities that make your prompt easier for Rocket to interpret and dramatically improve the quality of your output.
To show how this works in practice, we’ll use one example prompt across all five stages:
Goal: Create a user registration form.
1. Concise
Be brief but clear. Focus on what matters, and cut unnecessary wording.
Rocket parses prompts instantly. Extra words don’t help - it’s clarity that counts.
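Continuing the registration form example, a concise rewrite might look like this (the wording is illustrative, not a required format):

```
Instead of: "I'd really like some kind of page where people can maybe sign up with their details."
Try: "Create a registration form with name, email, and password fields and a Sign Up button."
```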
2. Logical
Group related elements. Organize the layout like a user would experience it.
Prompting out of sequence often leads to unexpected layouts or confusing flows.
Key term: Layout refers to how interface components are visually arranged. Logical order helps Rocket structure your app cleanly.
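For the registration form, a logically ordered prompt might read like this (the field order is a hypothetical example):

```
Create a registration form in this order: name, email, password, confirm password, then a Sign Up button at the bottom.
```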
3. Explicit
Be specific about what you want to appear and how it should behave.
Don’t leave details up to interpretation. The more precisely you describe the behavior, the more accurate the output.
Prompting works best when you imagine you’re writing instructions for a builder. Specific requests = precise results.
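An explicit version of the registration form prompt might spell out behavior like this (the validation rules are illustrative assumptions):

```
Create a registration form with email and password fields. Validate the email format when the user leaves the field, require passwords of at least 8 characters, and show an inline error message below any invalid field.
```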
4. Adaptive
Tailor your prompt to the current goal - whether it’s design, logic, or debugging.
Being too general makes it harder for Rocket to know what to prioritize - layout, logic, or both.
Key term: Logic describes how elements behave - validation, interactions, and flows.
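Adapting the registration form prompt to different goals might look like this (both variants are hypothetical):

```
Design-focused: "Style the registration form as a centered card, with the Sign Up button in the primary brand color."
Logic-focused: "When Sign Up is clicked, validate all fields, create the user, and redirect to the dashboard."
```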
5. Reflective
Don’t settle for your first draft. Reflective prompting means reviewing, adjusting, or rephrasing based on results or what you learned from a previous attempt.
For example, if your first version felt too vague, rewrite the prompt with the detail you wished the output had, rather than patching the result by hand.
Great prompting is iterative. If the output isn’t quite right, don’t just edit the result - revise the instruction. Reflection is what turns decent prompts into excellent ones.
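A reflective revision of the registration form prompt might progress like this (a hypothetical before-and-after):

```
First attempt: "Make a signup page."
Revised: "Create a registration form with name, email, and password fields. Disable the Sign Up button until every field is valid."
```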
Prompting Strategies
Different prompts unlock different outcomes. Whether you’re exploring, instructing, or troubleshooting, the right strategy will get you there faster.
Zero-shot prompting
How it works:
You give Rocket one instruction, with no examples or setup. It uses context and training to respond.
Best used for:
- Simple tasks
- Common UI or logic
- Fast one-off builds
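A zero-shot prompt is just the instruction itself, with no examples attached - for instance (illustrative):

```
Add a contact form with name, email, and message fields.
```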
One-shot prompting
How it works:
You give a single example to set expectations, then follow up with your actual task.
Best used for:
- Showing structure or style
- Setting format once for Rocket to follow
- Lightweight customization
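A one-shot prompt pairs one example with the real request - for instance (the page names are hypothetical):

```
Here's how my Profile settings page is structured: a left sidebar with section links and a main panel of grouped cards.
Now build a Notifications settings page in the same style.
```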
Few-shot prompting
How it works:
You show Rocket a few examples so it learns a pattern - then ask it to follow that structure.
Best used for:
- Repeating layout patterns
- Matching tone or behavior
- Consistent UX or logic styles
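A few-shot prompt lays out the pattern before making the request - for instance (the pages and fields are hypothetical):

```
Example 1: Products page - a grid of cards, each with image, title, price, and an "Add to cart" button.
Example 2: Services page - a grid of cards, each with icon, title, short description, and a "Learn more" button.
Now create a Team page that follows the same card pattern, with photo, name, role, and a "View profile" button.
```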
Chain-of-thought prompting
How it works:
You guide Rocket to reason step-by-step before giving the final output.
Best used for:
- Logic-heavy prompts
- Multi-step flows
- Debugging and conditional logic
Chain-of-thought isn’t just for solving problems - it’s for teaching Rocket how to think through them.
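A chain-of-thought prompt asks for the reasoning before the build - for instance (the checkout flow is a hypothetical scenario):

```
Before building the checkout flow, walk through it step by step: list each state the cart can be in, describe what happens at each step (empty cart, items added, payment, confirmation), and only then generate the screens.
```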
Preventing hallucinations and inconsistencies
Even well-written prompts can generate off-track results. Here’s how to reduce drift, missing logic, or features you didn’t ask for.
1. Be specific about source data and constraints
Specific nouns (fields, collections, actions) reduce ambiguity and keep Rocket grounded.
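For example, naming the exact collection and fields keeps the request grounded (the names Orders, userId, orderDate, total, and status are illustrative assumptions):

```
Vague: "Show the user's orders."
Specific: "On the Orders page, list records from the Orders collection where userId matches the logged-in user, showing orderDate, total, and status."
```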
2. Sequence complex logic into steps
Step-by-step instructions reduce skipped behavior and are easier to test.
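For example, a password reset flow broken into steps might read like this (a hypothetical sequence):

```
Step 1: Add a "Forgot password?" link on the login screen.
Step 2: When clicked, show a form asking for the account email.
Step 3: On submit, send a reset email and display a confirmation message.
```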
3. Reflect and revise before re-prompting
If the output isn’t what you expect, don’t just rephrase - step back and clarify what Rocket misunderstood.
Example prompt: "Review this screen’s logic. What assumptions did Rocket make that aren’t valid?"
The more complex the task, the more your prompt becomes a design brief. Revising the instruction is often more effective than editing the output.