Writing good questions
Think of Solve as a research analyst. Give it context, name constraints, and define your expected output. Before research begins, Rocket may ask clarifying questions to resolve ambiguity. You can also reference previous work through @-mentions or attach files and URLs for additional context.

Weak prompts vs. strong prompts
| Prompt | Why it falls short |
|---|---|
| “Tell me about the SaaS market” | No segment, geography, or specific question |
| “Who are our competitors?” | Solve has no context about who “we” refers to or what market you are in |
| “What should we build?” | No context about your product, users, or constraints |
| “Is this a good business idea?” | No specifics about the idea, market, or success criteria |
| “Help with pricing” | No product details, target market, or competitor benchmarks |
Five ways to sharpen your prompt
Include context about your situation
Tell Solve who you are and what stage you are at. “We are a seed-stage startup with 3 engineers targeting SMBs” produces very different recommendations than the same question without that context.

Useful things to include:
- Your company stage (pre-launch, seed, Series A, and so on)
- Your target customer (segment, size, industry)
- Relevant constraints (budget, timeline, team size)
- What you have already tried or decided
Name names
Instead of “analyze the competition,” name the 3 to 5 competitors you care about. Instead of “research the market,” name the specific category. Specificity turns a broad survey into a focused analysis.

Instead of: “What are the trends in AI?”

Try: “What are the top 3 trends in AI-powered B2B sales tools for 2025? Focus on products like Gong, Outreach, and Salesloft.”
State the purpose of the answer
A market size estimate for a pitch deck needs different precision than one for internal planning. Tell Solve what the output is for and it will calibrate accordingly.

Instead of: “What is the market size for project management tools?”

Try: “I need a TAM, SAM, and SOM estimate for project management tools targeting remote-first companies, formatted for a Series A pitch deck.”
Request specific formats and frameworks
If you want a SWOT analysis, say so. If you want a comparison table, ask for one. Explicit format requests prevent Solve from guessing your intent.

Useful format requests:
- “Create a feature comparison table”
- “Use a RICE scoring framework”
- “Structure this as a two-page investor memo”
- “Give me a pros and cons list for each option”
- “Present the bull case and bear case separately”
Set boundaries on scope
Without boundaries, Solve may go too broad or too narrow. State what to include and what to skip.

Instead of: “Analyze Stripe”

Try: “Analyze Stripe’s pricing model and payment processing fees. Do not cover their banking or identity products.”
Follow-up patterns
Use follow-up messages to extract more value from the initial report. Three patterns work well: drill-down, challenge, and pivot.

| Pattern | When to use | Example sequence |
|---|---|---|
| Drill-down | You want more detail on one section of the report | 1. “Map the competitive landscape for no-code database tools.” 2. “Tell me more about Airtable’s enterprise strategy. What features do they gate behind enterprise pricing, and how does that compare to Notion?” 3. “If I were building a no-code database for agencies, what 3 features would differentiate me from Airtable?” |
| Challenge | You want to stress-test the conclusions | 1. “Build an investment thesis for vertical SaaS in healthcare.” 2. “What is the strongest bear case against this thesis? What would make this investment fail?” 3. “Given those risks, how would you modify the thesis to account for regulatory headwinds?” |
| Pivot | An unexpected finding redirects your research | 1. “What is the market size for AI tutoring tools for K-12 students?” 2. The report reveals the fastest-growing segment is corporate training. 3. “Pivot to the corporate training market. What does the AI-powered training market look like for companies with 500 or more employees?” |
Iteration strategies
Start broad, go narrow
Begin with a landscape question such as “What does the market look like?” and then narrow to specifics. This gives you context before committing to a direction.
Research, then decide
Ask factual questions first (“What pricing models do competitors use?”) and then ask decision questions (“Which model fits my constraints?”). Separating research from decisions produces better answers for both.
Analyze, then synthesize
Run separate Solve tasks on different aspects of a problem: one for market analysis, one for competitive research, one for pricing. Then ask a final task to synthesize: “Combine these findings into a go-to-market strategy.” Tasks in the same project can reference each other with @-mentions.
Form your own view before sharing
Get the initial analysis yourself and form your own perspective before sharing the report with your team. This prevents the group from anchoring on whatever Solve said first.
Combining Solve with other capabilities
Solve works well alongside Build and Intelligence.

| Combination | How it works |
|---|---|
| Solve then Build | Research a market or feature set with Solve, then use those findings to scope a Build task. For example: “Build a pricing page based on the competitive analysis from my Solve task.” |
| Solve then Intelligence | Use Solve to identify key competitors, then set up an Intelligence task to monitor them continuously. |
| Intelligence then Solve | When Intelligence surfaces a significant change, such as a competitor launching a new feature, create a Solve task to analyze the implications. |
| Solve then Solve | Chain multiple tasks in a project: market analysis first, then competitive teardown, then pricing strategy. Each task can build on previous findings. |
All tasks within a project share the same context. You can @-mention a previous task to bring its findings into any new Solve task without re-entering data.
Common mistakes
| Mistake | What to do instead |
|---|---|
| Asking multiple unrelated questions in one prompt | Split into separate Solve tasks, each focused on one topic |
| Not providing enough context | Include your stage, audience, constraints, and what you already know |
| Taking the first result as final | Use follow-ups to challenge, refine, and deepen the analysis |
| Ignoring the methodology section | Read how Solve arrived at its conclusions so you can evaluate confidence |
| Using Solve when Intelligence is better | If you need ongoing monitoring, set up an Intelligence task instead of re-running Solve |
What’s next
Quick start
Put these practices into action with a guided walkthrough.
Use cases
Browse example prompts across nine research categories.