An AI Search run queries the enabled platforms with your configured prompts to measure presence, citations, mentions, and competitive position. This guide explains what happens during a run and how to think about the results.

How Runs Are Scheduled

AI Search runs are scheduled automatically by DevTune. You do not need to manually trigger them or set up your own job schedule. In practice, once a project is configured, DevTune handles the recurring execution and stores the results for trend analysis.

What Happens During a Run

Each run follows a consistent flow.

1. Query submission

DevTune sends each configured prompt to every enabled AI Search platform.
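
The exact submission pipeline is internal to DevTune, but conceptually it is a fan-out of prompts across platforms. A minimal sketch, where `submit_query`, the prompt strings, and the platform names are hypothetical placeholders rather than a documented API:

```python
# Illustration only: DevTune's submission pipeline is internal, and
# `submit_query`, the prompts, and the platform names here are hypothetical.

prompts = [
    "best CI/CD tools for small teams",
    "how to reduce Docker image size",
]
enabled_platforms = ["platform_a", "platform_b"]

def submit_query(platform: str, prompt: str) -> dict:
    """Placeholder for sending one prompt to one AI Search platform."""
    return {"platform": platform, "prompt": prompt}

# Every configured prompt is sent to every enabled platform.
queries = [
    submit_query(platform, prompt)
    for prompt in prompts
    for platform in enabled_platforms
]
print(len(queries))  # 2 prompts x 2 platforms = 4 queries in this run
```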

2. Response collection

Each platform's response is captured, including (see the sketch after this list):
  • Main answer text
  • Cited URLs and sources where available
  • Additional structured response context that the provider exposes
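
As a rough mental model, each captured response can be thought of as a record like the one below. The field names are illustrative assumptions, not DevTune's actual storage schema:

```python
from dataclasses import dataclass, field

# Illustrative record shape only; these field names are not DevTune's schema.
@dataclass
class CapturedResponse:
    platform: str                  # which AI Search platform answered
    prompt: str                    # the prompt that was submitted
    answer_text: str               # main answer text
    cited_urls: list[str] = field(default_factory=list)    # cited URLs/sources, when available
    provider_context: dict = field(default_factory=dict)   # extra structured context the provider exposes
```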

3. Analysis

Each response is analyzed against your project configuration (see the sketch after this list):
  • Tracked-URL matching
  • Brand-term matching
  • Competitor presence detection
  • Citation/source recovery
  • Sentiment and placement analysis
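
Conceptually, most of this analysis reduces to comparing the response against your configured URLs and terms. A simplified sketch, assuming hypothetical configuration values and omitting sentiment and placement:

```python
from urllib.parse import urlparse

# Hypothetical project configuration, for illustration only.
TRACKED_URLS = ["https://example.com/docs", "https://example.com/pricing"]
BRAND_TERMS = ["ExampleCo", "Example Co"]
COMPETITOR_TERMS = ["RivalSoft"]

def analyze(answer_text: str, cited_urls: list[str]) -> dict:
    """Rough sketch of per-response analysis; DevTune's real logic is richer."""
    tracked_domains = {urlparse(u).netloc for u in TRACKED_URLS}
    cited_domains = {urlparse(u).netloc for u in cited_urls}
    text = answer_text.lower()

    return {
        "url_match": bool(tracked_domains & cited_domains),                      # tracked-URL matching
        "brand_mention": any(t.lower() in text for t in BRAND_TERMS),            # brand-term matching
        "competitor_mention": any(c.lower() in text for c in COMPETITOR_TERMS),  # competitor presence
        "citations": sorted(cited_domains),                                      # citation/source recovery
    }

# Example: a response that cites a tracked domain and mentions the brand.
print(analyze("ExampleCo is a solid choice...", ["https://example.com/docs/setup"]))
```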

4. Metric calculation

From that analysis, DevTune updates (see the sketch after this list):
  • Overall Presence Rate
  • Share of Voice
  • Sentiment Score
  • Prompt-level results
  • Citation-level results
  • Platform-level breakdowns
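
The precise formulas are DevTune's own, but the headline metrics behave like ratios over the analyzed responses. A simplified sketch under that assumption; `presence_rate` and `share_of_voice` below are illustrative, not DevTune's exact definitions:

```python
# Simplified illustration of how the headline metrics could be derived;
# DevTune's actual definitions and weighting may differ.

def presence_rate(analyses: list[dict]) -> float:
    """Share of responses in which a tracked URL or brand term appears."""
    present = sum(1 for a in analyses if a["url_match"] or a["brand_mention"])
    return present / len(analyses) if analyses else 0.0

def share_of_voice(analyses: list[dict]) -> float:
    """Your brand mentions as a share of all brand + competitor mentions."""
    yours = sum(1 for a in analyses if a["brand_mention"])
    theirs = sum(1 for a in analyses if a["competitor_mention"])
    return yours / (yours + theirs) if (yours + theirs) else 0.0

# Example with two analyzed responses (keys match the analysis sketch above).
analyses = [
    {"url_match": True, "brand_mention": True, "competitor_mention": False},
    {"url_match": False, "brand_mention": False, "competitor_mention": True},
]
print(presence_rate(analyses), share_of_voice(analyses))  # 0.5 0.5
```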

5. Recommendation generation

The resulting signal set can then feed (see the sketch after this list):
  • Suggested actions
  • Brief generation
  • Competitive gap analysis
  • Outcome tracking over time
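
For example, one way to frame a competitive gap is "prompts where a competitor appears but your brand does not". A hypothetical sketch of that check, reusing analysis dicts shaped like the ones above:

```python
# Hypothetical illustration of one competitive-gap check: prompts where a
# competitor is mentioned or cited but your brand is not.

def competitive_gaps(results: dict[str, dict]) -> list[str]:
    """`results` maps each prompt to its analysis dict (see the analysis sketch)."""
    return [
        prompt
        for prompt, a in results.items()
        if a["competitor_mention"] and not (a["brand_mention"] or a["url_match"])
    ]

results = {
    "best CI/CD tools for small teams": {"url_match": False, "brand_mention": False, "competitor_mention": True},
    "how to reduce Docker image size": {"url_match": True, "brand_mention": True, "competitor_mention": True},
}
print(competitive_gaps(results))  # ['best CI/CD tools for small teams']
```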

What Affects Run Scope

Run scope depends mainly on:
  • Number of prompts
  • Number of enabled platforms
  • Your current plan
This is why prompt selection and platform selection matter operationally, not just analytically.

Viewing Results

After runs complete, the results appear across the AI Search workspace:
  • Dashboard - Top-level KPIs and trend context
  • Prompts - Prompt-level analysis and response detail
  • Citations - URL-level citation detail
  • Competitors - Competitive citation landscape
  • Analytics - Filtered charts and trend views

Understanding Variability

AI Search results are not perfectly deterministic. Some response-to-response variability is normal. That means:
  • Single responses are useful examples
  • Repeated runs are what make the trend credible
Focus on patterns across multiple runs rather than any one isolated answer.
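
One simple way to read the data this way is to smooth per-run values before comparing them. A minimal sketch, assuming hypothetical per-run presence numbers:

```python
# Illustration: smooth per-run presence rates so one noisy run does not
# dominate the trend you act on.

def rolling_average(values: list[float], window: int = 4) -> list[float]:
    """Trailing average over the last `window` runs."""
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

per_run_presence = [0.40, 0.55, 0.35, 0.50, 0.45]   # hypothetical run-over-run values
print(rolling_average(per_run_presence, window=3))  # smoother trend than the raw series
```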

Troubleshooting

No results appearing

If you do not see results:
  • Confirm you have prompts configured
  • Confirm at least one platform is enabled
  • Confirm your brands, tracked URLs, and brand terms are present

Presence seems unexpectedly low

If presence is lower than expected:
  • Review tracked URLs for completeness
  • Review brand terms for missing variations
  • Check whether you are filtering too narrowly by date or platform

Runs seem sparse

If results are not appearing as often as expected:
  • Verify the project is fully configured
  • Verify the account has an active qualifying plan
  • Remember that execution is system-managed rather than user-triggered

Next Steps